In response to a rise in academic dishonesty tied to Artificial Intelligence tools like ChatGPT this year, teachers have begun running written assignments through AI detectors.
AI writes using language prediction models, especially when responding to a prompt. That formulaic approach limits the specificity of the writing style and makes the text easier for detectors to flag, according to English teacher and department head Andrew Gibbs.
Turnitin is one program teachers use to scan students’ work for AI-generated content. English teacher Amy Andersen uses Turnitin for AI detection alongside its other features.
“I have used Turnitin anyway for plagiarism screening, but now it gives us two different measures of originality,” Andersen said. “We can see the percentage of matched material with other web sources and now an AI percentage match.”
However, these detectors can flag any work with vague language or phrasing as AI-generated, according to senior Lauren McGuire. An essay she’d spent hours writing was flagged by Turnitin as having been written by AI.
“[AI trackers] are probably a good idea,” McGuire said. “It’s just not totally reliable to use hundreds of kids’ work because you never know which ones are wrong or who to believe.”
Gibbs began running students’ online assignments through AI detectors after encountering AI-written work. He hopes that in the future AI can be incorporated into the classroom in ways that go beyond simply copying AI-generated responses.
“As there is with any new technology, there’s an element of excitement, like ‘How can I use this to benefit students?’” Gibbs said. “But how do you teach how to use those technologies appropriately? AI in general is just so ambiguous in the many different ways that it can be used.”
The introduction of AI chatbots has complicated writing assigned outside the school day because of how easily it can be used to cheat. Although AI is often treated as a practical hurdle, many also view it as an ethical or moral issue.
“It comes down to asking, ‘When does the writing and ideas become somebody else’s?’” Gibbs said. “For me it becomes particularly problematic if [students] are using it to generate text.”
During the college admissions process for prospective fall 2024 students, the use of AI to write or help write personal narrative essays has emerged as a concern, according to Education Week. Because AI writes from formulaic models, chatbots craft essays in broad language that lacks the details and anecdotes only a human can provide.
Gibbs has noticed that if an AI chatbot is fed information or a prompt containing an error, it will not correct the error in the response it generates, raising its potential to spread misinformation. It will fold the incorrect information it is given, whether a name or a date, directly into its response. In an English class, this is easily caught when supporting evidence is cited incorrectly.
“AI is not going to be able to tell your story the same way you can,” Gibbs said. “It cannot generate evidence that you need to support your ideas yet.”
Senior Violet Paisner believes AI can be used positively in school, such as a quick way to search the internet for facts or research.
“I think AI can be beneficial for generating ideas,” Paisner said. “There is just a very fine line where turning something you got a bunch of help writing with your name on it becomes straight up plagiarism.”
As AI chatbots continue to develop, AI detectors will too. The use of these platforms is set to become more widespread and more regulated, making the future of education’s relationship with AI uncertain.