When asked why artificial intelligence (AI) is controversial in universities, ChatGPT, a popular AI chatbot, cited privacy and security concerns, the potential for misuse and a lack of transparency and explainability.
“Overall,” the chatbot said, “AI is a powerful technology with the potential to bring many benefits to universities, but also raises many ethical and societal concerns that need to be carefully considered and addressed.”
A University of Calgary (U of C) research team is looking to tackle these ethical concerns and help the world understand what the use of AI software, such as ChatGPT, means for post-secondary education.
“We are very excited to be researching the ethical use of artificial intelligence for teaching, learning and assessment in higher education,” lead researcher Sarah Eaton wrote in a statement to the Charlatan. “We know that artificial intelligence is the future and we want to ensure we are supporting faculty, and most importantly student[s].”
Eaton, who is editor-in-chief for the International Journal for Educational Integrity, added AI in education “is already a reality” and Canadian universities must ensure they’re “setting students up for success” by educating them on AI-assisted technology. Working alongside Eaton is Jason Wiens, a U of C English professor who previously served as the departmental academic misconduct officer.
“We’re hoping that our research can provide guidance for administrators in terms of developing policy about the ethical uses of these technologies,” Wiens said, “but also guidance to instructors for how to perhaps find ways to use these technologies as a positive teaching tool.”
A race against technology
Wiens has a particular interest in poetics, specifically the creative applications of AI-assisted or AI-generated writing in modern times. He compared the emergence of technology such as ChatGPT to Mary Shelley’s 1818 novel Frankenstein.
“It’s a novel that deals at an early point in history with these ethical questions around the technologies that humans create and the degree to which those technologies outpace our ability to keep up with them ethically,” he explained.
Wiens is not only applying the Frankenstein analogy in his research; he also plans to have students ask AI to write an essay comparing itself to the novel, then dissect and edit the generated product.
“Perhaps working with rough drafts generated by AI is something that can build those revision and editing skills that students need to develop, as well as improve their own writing,” he said.
Beatriz Moya, a doctoral research assistant at U of C’s Werklund School of Education, has been working on this and other projects about academic integrity. She said she’s seen ethical issues with AI from both a teacher and student perspective.
“We are facing a non-human element that can develop some tasks with little input,” Moya said. “That’s something that might be shocking for many, but I also think that there’s an opportunity here to really step down, reflect and see if there are any other ways to approach this beyond the first reaction and panic.”
This semester, Moya and her colleagues are in their data collection phase, conducting surveys and interviews with students and faculty. They have also been experimenting with software such as ChatGPT by having participants differentiate between samples of human- and AI-generated writing.

“We need to interact to see what we could get out of these new encounters with this kind of technology,” Moya continued.
She said the issue of AI-assisted technology is too challenging to abandon.
“There is a need to understand this because we will have to deal with [AI] in the future,” Moya added. “[These technologies] exist, and they are spreading.”
Navigating AI at Carleton University
The research at U of C is relevant to institutions such as Carleton University, which recently released a digital strategy and roadmap designed to help students and faculty navigate new technologies. Though the document does not explicitly mention AI, it suggests an emphasis on using digital tools that could include software such as ChatGPT to support classrooms.
Paul Wilson, associate dean for students and enrolment in Carleton’s faculty of public affairs, said ChatGPT “opens up all sorts of avenues for integrating AI into learning.” He added it also introduces fear of a new kind of academic misconduct at universities.
“Learning to write is really learning about disciplined expression,” Wilson said, noting he would hate to see students miss out on gaining such essential skills.
Allan Thompson, associate director of Carleton’s School of Journalism and Communication, encouraged students to “really think hard about this” in terms of the implications AI has—not only in schools, but also in the field of journalism.
“It’s another twist in technology that has come along and it will bring some challenges,” Thompson said.
He added that since the use of AI exists on a continuum from spell-check to plagiarized essays, it is important to look at what is acceptable and what is not in how it’s applied.
Moya said outright banning AI-assisted technology “could really damage the future of education.”
“It’s important to think that we can turn this moment into hope for a more ethical future,” she added. “We need to understand our human capabilities, and we need to use the tools we have at hand, but we need to learn how.”
The researchers aim to begin sharing preliminary findings in late May and to share their conclusions in 2024.
Featured image by Mia Parker/The Charlatan.