Assessments for subjects such as history, languages and social sciences are most likely to be affected, given that written work – end-of-semester take-home essays, for example – is a major means of evaluating students’ knowledge of the subject matter.
Although the use of AI-based tools is currently banned in at least one institution, the lack of effective AI-detecting software might render such a ban toothless. For the time being, institutions might be better off adjusting existing assessment methods and taking precautionary steps.
First, students could be required to conduct oral defences of their essays. A panel of examiners could randomly draw students from each class and ask questions based on their papers. Such a spot check could determine whether students wrote their papers themselves, and thus deter cheating.
Alternatively, we can revisit an old-school method: the supervised written exam. Students could be given a few research articles and required to produce a short essay on the spot, without any digital help. This could be a better way to evaluate how much they have learned and retained over the semester.
In the long run, teachers can grade students on their learning progress, rather than their end-of-semester performance, through tutorial discussions. Some classes could be run not as lectures but as tutorials, in which students are asked to express their views on set discussion topics. Teachers can then appraise how well students articulate their views and whether they think critically in response to others. This kind of observation – especially of students reacting to each other’s ideas – captures something AI cannot do on students’ behalf.
Students could also be asked to critique pieces generated by AI so that they practise independent thinking and learn not to rely on it.
There may be many more ways to balance the merits and risks of generative AI in arts education. Hopefully, scenes of robots taking over – as in the Terminator movies – will not become our reality.
Alison Ng, assistant lecturer, Centre for Applied English Studies, University of Hong Kong
Test students on what ChatGPT doesn’t know
I am writing in response to the letter, “In the age of ChatGPT, design better tests to foil plagiarism” (April 1).
I appreciate your correspondent’s suggestions and would like to add that question content can also play a critical role in combating artificial intelligence-enabled plagiarism.
To begin with, questions should require or prompt students to draw on subjective, personal experience, reflections and insights.
Additionally, questions should concern events from the past year. This is because GPT-4, the model behind OpenAI’s ChatGPT, only has knowledge of events up to September 2021.
Davy Wong, Cheung Sha Wan