When AI Breaks the Exam
During a recent review of a fully online Master's programme, a familiar question surfaced: how do we know the student actually did the maths?
Generative AI has intensified this concern, but I argue that it has not created a crisis in assessment. It has revealed structural weaknesses that were already present in our reliance on unseen exams and correctness as primary evidence of learning.
If digital environments make surveillance-based control both more difficult and more problematic, then we need to ask deeper pedagogical questions. What counts as valid evidence of learning? How should academic integrity be understood when tools are ubiquitous and powerful?
I explore these questions in my latest blog post:
e-learning-rules.com/blog/0059…
I welcome thoughtful discussion from colleagues working in digital and distance education.
#DigitalPedagogy #AIinEducation #Assessment #HigherEducation #OnlineLearning
[Image: Retro-futurist scene: armoured cyborg seated in a chair wired to machines, a luminous figure reaching toward him, and a giant brain in a dome above a neon city and stars.]
Jacob Urlich
in reply to Steve
You need to understand your students, because you are out of touch. You may have skills, but for students they are useless. Students are paying for this.
Give us something worth studying: something that makes a real difference, real-life experience!
Jacob Urlich
in reply to Steve
Look for people who genuinely want to engage in the task. Let us experience teamwork properly; the frog died because we were not able to work as a team.
Jacob Urlich
in reply to Steve
But you cannot dismiss them. You are knowledgeable; you need to bring learning to life for them.
Steve
in reply to Jacob Urlich
Jacob,
I hear what you're saying.
If a task feels pointless, students will optimise it. In an AI-saturated world, that is predictable. When someone says they will use AI to get a boring job done, that often tells us more about the design of the task than about the character of the student.
I agree that learning should involve real problems, not artificial exercises that can be completed mechanically. Education should develop judgement, decision-making, and the ability to work with uncertainty. Those are not things AI can simply replace.
At the same time, correctness still matters. In many fields, getting the maths right is not optional. The deeper question is whether students can explain, justify, and apply what they produce, not just generate an answer.
Your point about teamwork is important as well. Collaboration only works when roles, responsibility, and shared purpose are properly structured. Otherwise it becomes tokenistic.
We probably agree on more than it seems. If students are bored, that is a design issue. The response should not be dismissal or surveillance, but better problems, clearer purpose, and learning that connects to real practice.