When AI Breaks the Exam
During a recent review of a fully online Master’s programme, a familiar question surfaced: how do we know the student actually did the maths?
Generative AI has intensified this concern, but I argue that it has not created a crisis in assessment. It has revealed structural weaknesses that were already present in our reliance on unseen exams and correctness as primary evidence of learning.
If digital environments make surveillance-based control both more difficult and more problematic, then we need to ask deeper pedagogical questions. What counts as valid evidence of learning? How should academic integrity be understood when tools are ubiquitous and powerful?
I explore these questions in my latest blog post:
e-learning-rules.com/blog/0059…
I welcome thoughtful discussion from colleagues working in digital and distance education.
#DigitalPedagogy #AIinEducation #Assessment #HigherEducation #onlinelearning
[Image: retro-futurist scene — an armoured cyborg seated in a chair wired to machines, a luminous figure reaching toward him, and a giant brain in a dome above a neon city and stars.]
Jacob Urlich 🌍
in reply to Steve: You need to understand your students, because you are out of touch. You may have skills, but for students they are useless. Students are paying for this.
Give us something worth studying — something that makes a real difference, real-life experience!
Jacob Urlich 🌍
in reply to Steve: Look for people who genuinely want to engage in the task. Let us experience teamwork properly — the frog died because we were not able to work as a team.
Jacob Urlich 🌍
in reply to Steve: But you cannot dismiss them. You are knowledgeable — you need to bring learning to life for them.