APRIL 19 — The official answer is, of course, yes. The unofficial truth may be more complicated.
Yes, but…
Since most university students are adults, it’s really up to them. We can’t be checking on every assignment submitted by every student.
They paid their fees and it’s their education, so ultimately, they have to be responsible for the quality of their own learning.
No doubt chatbots have made it much easier to submit “fake” assignments but, hey, even without AI, detecting cheating isn’t foolproof because a student can simply ask another person to do their work for them.
And if you’ve got hundreds of students, how is it feasible for the lecturer to check everyone?
Of course, the option of making every student sit for handwritten exams is there (and this is still being widely done). But online assignments remain part of the course, and they cannot be removed entirely.
And aren’t institutions supposed to move away from traditional modes of education and assessment?
Yes, but…
AI detection software isn’t cheap and its rates are rising. Institution funds are already tight, so sometimes we encourage lecturers and/or students to do their own AI checking and maybe submit a declaration of originality.
Furthermore, many foreign students (or students struggling with English proficiency) use AI platforms for translation purposes, all of which “shows up” as high AI use.
Needless to say, it’s very hard to distinguish between “co-pilot writing the student’s assignment” (bad) and “Gemini translating the student’s assignment” (neutral).
Yes, but…
Even if the AI-detection software flags some “inappropriate” use of AI, there’s still the question of whether the student actually used AI to write that particular sentence or whether it’s simply an error on the part of the software.
There have been many cases where such software claims a paragraph wasn’t written by a human when in fact it was. For example, simply run the Gettysburg Address through a few “anti-AI” programs. Chances are some portions of the speech will be flagged.
A huge problem with AI detection is that, unlike plagiarism detection, it often cannot be “proven” that a student used Deepseek (or whatever) to write her assignment.
With plagiarism, it’s easy: Anti-plagiarism software can simply list the websites and paragraphs that closely resemble what the student wrote.
With a chatbot, however, it’s almost impossible to prove that a student used said software to construct a paragraph.
Virtually the only way to verify whether a student wrote something is to interview said student.
This works very well, especially for postgraduate students. But if student numbers are very high (say, into the hundreds), it becomes impractical.
So, do educational institutions care about AI-written assignments?
It’s, uh, complicated.
* This is the personal opinion of the columnist.