University Protocol Aims to Limit Student Use of AI

The use of artificial intelligence amongst Loyola’s student body prompted Loyola’s Office of the Provost to update their statement regarding academic integrity on Aug. 31.


The statement encourages students and faculty to uphold Loyola’s community standards along with academic honesty.

Professor Bridget Colacchio, who works in the Office of the Provost, said the rise of artificial intelligence and ChatGPT has created complications for academic integrity, leading Loyola to implement policy changes over the past academic year to combat plagiarism in classrooms.

Loyola’s Office of the Provost, which provides the infrastructure and resources for faculty at Loyola, became increasingly worried about the use of AI within the last academic year, according to Colacchio. Colacchio said it became difficult to navigate how faculty should approach helping students get the most out of their education without the possibility they are falsifying their assignments.

“We became aware that faculty and instructors were worried about two things,” Colacchio said. “One: How do you know if students are using AI for their assignments? And two: How do I teach in a way that is helping students get the most out of their education?” 

While Loyola’s rules regarding academic integrity have always prohibited plagiarism, they didn’t previously mention the use of AI specifically.

“To maintain our culture of excellence and integrity, students are not to use AI assisted technology in the classroom unless they are specifically authorized to do so by their faculty for an assignment, a test, a quiz, or any deliverable that will be graded,” an Aug. 16 statement reads. 

Professor Robyn Mallet, the vice provost of academic programs and planning, said she works directly with Loyola’s professors to help them design assignments and to inform them of the university’s existing policies and practices for academic integrity.

“I’ve had a lot of inquiries from faculty about how they should support students at this moment in our culture,” Mallet said. “This is a unique moment in time and they want to know what the university’s standards and regulations are so that they can inform their choices in the classroom.”

Associate philosophy professor Joe Vukov and associate biology professor Michael Burns said they are working to combat AI-enabled plagiarism by teaching students how to use the technology to benefit their learning.

After noticing the use of AI prior to ChatGPT’s release, Vukov and Burns began introducing AI to students at the beginning of the fall 2022 semester in order to teach them the correct way to use artificial intelligence. They have been creating assignments that guide their students in how to use AI in specific ways based on the course content.

“As long as you’ve got assignments that are feeding into a well-designed course and a well-designed assignment, then I think ChatGPT can be a really good tool for generating content that’s feeding into those assignments, in the same way that you might use something like the internet,” Vukov said. “The internet is a great resource if you’re using it as a tool for learning in the context of larger goals.” 

Along with informing professors and faculty on policy language, the Office of the Provost encourages student feedback and communication in the discussions around AI usage because it’s the student’s education that is being most affected, Colacchio said. 

Third-year student Lizzie Carol has learned about AI mostly through social media but has also seen it addressed in her classes.

“In most of my classes, my professors have said it’s not something you should use, especially to make content for your assignment,” Carol said. “They say it’s not as big of a deal to gather information, but it’s not always accurate information.”

In a class where Carol is required to do coding work, her professor has emphasized that AI can be helpful, but she said the information it produces can sometimes be false.

First-year Neha Vadakumchery has not used ChatGPT or AI herself, but she said she understands the learning benefits it can bring to some students if used correctly.

“I do know that other people use it and it does help them,” Vadakumchery said. “I know my friend uses AI, but it’s not to cheat, it’s more to understand the content [of her assignments].”

First-year student Kateri Martinez said she doesn’t see AI being useful in classroom settings.

“I like looking at AI-generated memes, but I do not think it is necessary to use AI in a classroom, because it is not at the same level as a professor,” Martinez said. 

As AI continues to develop, Mallet said Loyola aims to incorporate its use into learning, both to counteract the temptation for students to use it to plagiarize and to help prepare students for a technology-driven future.

This article is by Amy Rupsis and Anna Waldron

Featured photo by Ryan Pittman
