40 Questions Education Should Ask Before Trusting AI
Your institution's assessment systems were built to measure human thinking, not to detect machine output. The questions you ask about AI now will determine whether your graduates have genuine capability or just credentials that look the same.
These are suggestions. Use the ones that fit your situation.
Assessment and Integrity
1. When Turnitin AI flags a submission as potentially AI-generated, what evidence do you have that the flagging system itself does not produce false positives on student work that is simply well-written?
2. If you ban ChatGPT use in assessments, how will you distinguish between a student who used it to check their thinking versus a student who used it to replace their thinking?
3. Your current rubrics reward clear writing and correct answers. Which of these rewards the struggle that actually builds intellectual capacity in your students?
4. When a student submits work written with Khanmigo's help, can your marking scheme tell you whether they understood the maths or learned to prompt the AI effectively?
5. If you move to open-book, AI-permitted assessments, what prevents students from outsourcing all thinking to the tool rather than using it as a thinking aid?
6. Your institution awards a degree in engineering or literature or nursing. What specific capability must a graduate demonstrate that cannot be faked with AI assistance in the exam?
7. When you look at a piece of student work, what questions would you ask the student to expose whether they could do it without AI?
8. Does your plagiarism policy currently have a definition of what counts as plagiarism when the copied text was generated by a machine rather than written by another human?
9. If your institution permits AI use in assignments, how do you explain to employers that your graduates have the skills the degree claims they have?
10. What assessment task in your current curriculum cannot be completed by ChatGPT to a passing standard?
Teaching and Learning Design
11. Your students can ask Gemini or ChatGPT any question and get an answer in seconds. What is the pedagogical purpose of the questions you still ask in class?
12. When you teach critical thinking, are you teaching students to evaluate AI outputs, or teaching them to think without AI to begin with?
13. Duolingo AI now gives instant corrections on language exercises. How will your language students develop the error-correction ability they need when the app is not available?
14. If students use ChatGPT to generate essay outlines before writing, at what point does the outline become the work and the essay become decoration around it?
15. Your lecturers spent years developing their ability to explain difficult concepts clearly. What changes when students can ask an AI to explain the same concept in 10 different ways?
16. When you design a practical or laboratory exercise, how essential is it that students struggle through the problem-solving themselves rather than having an AI suggest the next step?
17. Khanmigo can tutor students one-to-one in maths. What is your teachers' role if not to explain maths?
18. If your curriculum assumes students will independently research topics, what happens when AI summarises that research without the student reading the original sources?
19. Your students need to write reports, essays, and analyses across multiple subjects. Which of these requires the student to generate original language, and which just requires them to express someone else's thinking?
20. When a first-year student can offload their hardest thinking task to an AI tool, what struggle-based learning are they missing in that moment?
Curriculum and Institutional Purpose
21. Your institution's mission statement says you develop graduates who can think critically and solve problems. Does your current curriculum test whether they can do these things without AI assistance?
22. If you implement AI-literacy training for students, are you training them to use AI as a tool, or training them to become dependent on AI while believing they are still thinking?
23. Your graduates will compete for jobs. What capability will they have developed that an AI tool cannot replicate, and how does your curriculum ensure they develop it?
24. When you update your curriculum to include AI literacy, what existing content or skill are you removing to make space for it?
25. If your institution publishes learning outcomes for graduates (engineers who can design, nurses who can diagnose, teachers who can explain), does your curriculum ensure students develop these without AI doing the work for them?
26. Your lecturers and teachers were trained to mark work, give feedback, and diagnose where students went wrong. Does AI-generated student work still allow them to do this?
27. When curriculum designers choose between a learning activity where students struggle through a problem and an activity where AI does it faster, what criteria should they use?
28. Your institution offers accreditation in a specific field. Do your accrediting bodies recognise graduates trained with significant AI assistance as meeting the same standards as graduates trained without it?
29. If students in your business school can use ChatGPT to write case-study analyses, how is that different from hiring a business consultant instead of developing their own judgement?
30. What intellectual capability is essential to your discipline that your students must develop themselves, and what can reasonably be delegated to an AI tool?
Evidence and Monitoring
31. You have data on how your graduates perform in employment after leaving your institution. Are you tracking whether graduates who used AI tools during study perform differently in their first role?
32. When Turnitin AI identifies a submission as potentially AI-written, what is your process for checking whether the flag is accurate before taking action?
33. Your institution has taught a cohort of students with widespread access to ChatGPT. What evidence tells you whether they developed the same capability as previous cohorts without that access?
34. If you allow AI use in some assessments but not others, can you measure whether students actually understand the difference, or do they treat all assessments as AI-permitted?
35. Your teachers report that students seem less able to explain their own work than in previous years. Have you investigated whether this correlates with AI tool use?
36. When a student passes an assessment, what evidence confirms that they developed the capability the assessment was designed to test?
37. You know which AI tools your students have access to. Do you track how they actually use them in their learning, or only guess based on submission quality?
38. Your institution permits ChatGPT use as a brainstorming tool. How would you know if a student is using it as a brainstorming tool versus using it to generate all their ideas?
39. If you implemented a new assessment method to detect whether students understand their own work, what would that method look like, and how would you know it actually works?
40. Your graduation ceremony recognises the achievement of your graduates. What is your evidence that their achievement reflects the capability your institution claims they have developed?
How to use these questions
Ask your lecturers and teachers what they notice about student thinking now versus two years ago. Their observations often precede your data.
When you see a well-written student submission, ask the student to explain one part of it out loud. The gap between written quality and spoken ability reveals how much the AI contributed.
Design at least one assessment per programme where the student must explain or defend their work verbally. AI cannot take that examination for them.
Distinguish between AI literacy (how to use tools) and critical thinking (how to judge when use is appropriate). One is a skill; the other is judgement.
Before updating your assessment policy for AI, ask the employers who hire your graduates whether they have noticed any change in graduate capability. Your policy should align with their actual needs.