For the Education Sector

The Most Common AI Mistakes Education Makes

Schools and universities are treating AI as a policing problem rather than a learning design problem, leading to assessment systems that fail to detect real learning gaps. This leaves graduates with credentials they have not earned through the cognitive struggle that builds actual capability.

These are observations, not criticism. Recognising the pattern is the first step.

Assessment and Integrity Mistakes

Schools assume that Turnitin AI flags will catch cheating, but the tool misses students who prompt ChatGPT for explanations and then use those explanations in their own writing. The real problem is that the student never had to think through the problem themselves.

The fix

Design assessments that require showing working, reasoning steps, or responses to follow-up questions that reveal whether the student understands the concept or just copied an answer.

Traditional essays test whether a student can write persuasively and structure an argument, but ChatGPT writes persuasively and structures arguments well. An essay assignment no longer measures what you intended to measure.

The fix

Shift to assessments that require live explanation, oral defence of written work, problem solving with novel data, or annotated work showing the thinking process.

Teachers use Khanmigo as a substitute for direct instruction on hard topics, trusting that AI tutoring is equivalent to teacher guidance. Students can receive a correct explanation without developing the ability to ask for help or persist when confused.

The fix

Use Khanmigo to supplement teacher explanation, not replace it. Track which students repeatedly ask Khanmigo the same question type and teach those students the underlying skill directly.

Many coursework assignments ask students to summarise, compare, or explain concepts. Google Gemini does this well. An assignment completed entirely by AI produces no evidence that the student learned anything.

The fix

Include assessment tasks where the student must apply knowledge to a new situation, defend a choice, explain an error, or build something from scratch that Gemini cannot generate for them.

Schools block ChatGPT access or forbid its use, but students who encounter AI tools in work and university will not know how to verify what the tools produce. A ban creates a skills gap, not a safety boundary.

The fix

Teach students to test AI output against trusted sources, recognise when AI outputs plausible but false information, and understand what tasks AI can and cannot do reliably.

Teaching and Learning Design Mistakes

Teachers ask students fewer questions, provide AI-generated answers more quickly, and assume faster answers mean faster learning. Research shows that struggle and productive failure build stronger memory and understanding than instant solutions.

The fix

Use AI to provide hints or partial answers after students have attempted the problem, not before. Ask students to explain why an AI answer is wrong or incomplete rather than accepting it as final.

Critical thinking programmes focus on logic, bias, and evidence, but do not teach students to spot hallucinations, outdated information, or biased training data in AI text. Students graduate thinking they can evaluate information when they cannot evaluate machine-generated information.

The fix

Include case studies where ChatGPT, Gemini, or Duolingo AI produce confidently wrong answers. Have students practise checking AI claims against primary sources and catching the specific errors the AI makes.

Duolingo AI focuses on vocabulary and sentence construction but cannot assess pronunciation, fluency, or understanding in real conversation. Schools assume completion of Duolingo lessons means language competence.

The fix

Use Duolingo AI as a homework supplement only. Assess actual language ability through live conversation, recorded speech, or reading comprehension of authentic materials.

Teachers assume that because students can ask ChatGPT what photosynthesis is or when World War II began, foundational knowledge is no longer worth teaching. Students then lack the mental scaffolding needed to understand advanced concepts.

The fix

Distinguish between retrieval fluency and understanding. Teach core facts, concepts, and vocabulary directly. Use AI only after students have built mental models, not instead of building them.

Institutional and Curriculum Mistakes

Schools add a single unit on AI to the computing curriculum and assume this fulfils their responsibility. AI affects how history essays are written, how maths problems are solved, and how language learning happens. One unit cannot prepare students for that.

The fix

Work with teachers across subjects to identify where AI tools are used in their field and redesign assignments and teaching to account for AI's actual presence in students' working process.

Universities ban ChatGPT and Gemini but do not change grading standards or assessment design. Grades stay the same even though the work students are capable of producing has changed, so the credential no longer signals the capability it once did.

The fix

Adjust learning outcomes and assessment difficulty upward to match a world where students have AI support. Grade on what students can do with AI assistance, not what they could do without it a decade ago.

Schools continue teaching students to write persuasive essays, summarise texts, and recall facts. ChatGPT writes persuasively, summarises well, and retrieves facts reliably. These skills no longer prepare students for work or further learning.

The fix

Redesign curriculum to focus on what AI cannot do: creating value in novel situations, ethical judgement, collaboration, asking better questions, and judging when AI output is wrong.

Schools scan essays for AI detection but do not observe how students learned. A student may pass integrity checks by lightly rewording AI output, or may fail to learn despite submitting original work. Submission checks miss both cheating and learning failure.

The fix

Build integrity checks into the learning process: require students to explain their work, show drafts and working, discuss their thinking in class, and complete some assessment live where you can observe their reasoning.
