Many experts believe AI chatbots like ChatGPT are here to stay. So how can they fit into the academic world? We looked for answers.
Shortly after the artificial intelligence chatbot ChatGPT was introduced to the world last November, English teacher Jon Fila says he got a wake-up call.
“I started noticing some odd types of submissions from my students, with the qualities of writing I might not expect,” he recalls.
Fila, who teaches online courses to high school students in the metro, has written several guidebooks about using AI in the classroom and on campus.
“It’s not something we should be afraid of,” he declares. “But it’s definitely something we need to learn about more, before we just blindly dive in.”
With the start of the school year just around the corner, educators acknowledge that programs like ChatGPT are already a presence on the academic landscape.
“Educators now have to decide that the work they are grading, is that something that has been written by a student, or perhaps a model? There’s the concern,” says Manjeet Rege, with the Center for Applied Artificial Intelligence at the University of St. Thomas. “As an educator, we would want students to build their analytical thinking. It’s never about the end answer, it’s the path to get the answer.”
One of the biggest concerns?
Plagiarism.
“That is a very sad, sad student, when you’re doing cutting and pasting without any analysis of it,” notes David Nguyen, a Senior Fellow with the Technological Leadership Institute at the University of Minnesota.
He says he’s encouraging students to use these programs as a learning tool and classroom prep.
“We want the student to embrace this technology,” Nguyen states. “We want them to help with the clarity of thought… get this tool to help you write, so that you can communicate with the other person.”
For students, this is also new academic territory.
Carter Shimek, an incoming University of Minnesota senior, says the school has sent him emails warning against plagiarizing language from an AI program, and urging him to consult his instructors if he has questions.
He says he doesn’t use programs like ChatGPT but has friends who do.
“I’ve heard people say, you know, using it in a way to sort of aggregate research papers mostly,” he explains. “It’s more like it creates an outline, a sense of understanding broad ideas, and then they use that to kind of form their own opinions on it.”
Nationwide, some schools are requiring students to show editing history and drafts of their projects to prove their thought process.
There are AI detection programs out there, but most experts say that while they’re good at confirming human work, they’re spotty at identifying chatbot-generated text.
“The problem is, they’re not accurate,” Rege says. “So, there are false positives.”
Fila says he’s convinced these programs are here to stay, and he’s trying to find ways to discuss appropriate, ethical use of AI with his students.
“I could spend my time trying to ChatGPT-proof assignments that they would quickly learn to get around, or I could accept the fact that we live in a world that now incorporates these tools in almost everything we’re doing,” he says. “Sure, use it to lighten some of your workload and get a first draft out, but after that, I want to know what sources can you identify that might validate the claims being made. How can you personalize it to incorporate more of your own experiences and culture related to the topics.”