Instructors from Computer Science and the Humanities weigh in on the ethics of AI in the classroom.
Pro-AI
In our courses, Formal Methods in Software Engineering and Programming Languages, we’re evolving the classroom environment. We encourage our students to leverage large language models (LLMs) like OpenAI’s ChatGPT, Anthropic’s Claude and Google’s Gemini as active collaborators.
By integrating these tools into their regular assignments, students are learning to navigate the complexities of modern software development in real time.
In a technical setting, students use LLMs to resolve intricate dependencies and identify potential security vulnerabilities or performance bottlenecks within their code. These tools are particularly valuable when navigating the nuances of specialized programming languages like Scala or the formal specification language TLA+.
These languages have a steep learning curve, but LLMs provide a translation layer between a student’s conceptual logic and correct source-level syntax.
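To make that concrete, here is a minimal, hypothetical sketch in Scala 3 (the Order type, the data and the demo entry point are invented for illustration, not drawn from our coursework). The conceptual logic, “total each customer’s orders,” fits in one sentence, but the idiomatic standard-library call that expresses it, groupMapReduce, is exactly the kind of syntax an LLM can surface and a student must then learn to read and audit.

case class Order(customer: String, total: Double)

@main def demo(): Unit =
  val orders = List(
    Order("ada", 20.0),
    Order("grace", 15.5),
    Order("ada", 4.5)
  )

  // Key each order by its customer, map it to its total and
  // combine totals with +. Concise and idiomatic, but opaque
  // to a newcomer reading Scala for the first time.
  val totalsByCustomer: Map[String, Double] =
    orders.groupMapReduce(_.customer)(_.total)(_ + _)

  println(totalsByCustomer) // e.g. Map(ada -> 24.5, grace -> 15.5)

A student who can explain each of the three argument lists has genuinely learned something about the language; one who pastes the line in blindly has not, and that distinction is the core competency we want students to build.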
We believe it’s vital for students to engage with this technology now. Understanding both the remarkable capabilities and the inherent limitations of LLMs is a prerequisite for the modern workforce. As the industry moves toward adopting these tools as “force multipliers,” the ability to audit LLM-generated code for accuracy, efficiency and security becomes a core competency for any graduating software engineer.
Currently, much of our classroom interaction with LLMs follows a “call-and-response” pattern: a student issues a prompt, reviews the output and manually integrates it into their workflow. However, we’re on the cusp of a shift toward agentic AI. Unlike standard chatbots, agentic systems are capable of acting independently to accomplish loosely defined, high-level tasks.
This evolution will fundamentally change the software development life cycle. While we currently see challenges regarding energy consumption, financial costs and technical complexity, these barriers will likely diminish as the technology matures.
We’re moving toward a future where AI is no longer just a tool we talk to, but an autonomous partner capable of multiplying the output of individuals and teams at an unprecedented scale.
The future of AI is expansive. While frontier labs are increasingly cautious about public releases due to the sheer power and potential risks of these models, the underlying trajectory remains clear. AI will eventually touch every phase of software engineering. By embracing these tools at Loyola today, we are ensuring our students don’t enter the workplace as mere observers of this revolution, but as the engineers leading it.
Anti-AI
My policy is “strictly no AI use for any of the assignments.” The reason behind the policy is that AI decreases our cognitive capacity and undermines the ability to think critically, make connections and solve problems. Those are critical skills for students in any major, because the ability to work through problems, articulate logical arguments and draw inferences is required in any line of work.
I think students find AI to be a great shortcut when they’re overwhelmed with schoolwork, but it can cause lasting damage, and the effects are starting to be seen in both academia and workplaces.
As a community, I think the university should adopt a universal policy regarding AI and ensure compliance from both faculty and students.
What we’re seeing is a patchwork of classroom policies and rising frustration among faculty who restrict AI use for assignments, because they aren’t given clear guidelines on how to confront students who violate the rules. As a result, students are getting away with it simply by claiming Turnitin is wrong.
We also see an invisible battle with faculty who allow AI use, because students cite those courses to justify using AI to write their assignments. I believe faculty who encourage or allow AI use without explaining its pitfalls are doing a disservice to students.
Our job is to educate and transfer skills to our students. If allowing AI in the classroom is taking away the ability to think critically from students, then we’re failing them.
What I tell my students is to forget about the grade; your worst self-written paper is better than any AI-written paper. When you think and put arguments down on paper, you’re practicing invaluable skills that will stay with you for the rest of your life. When we, as a society, delegate thinking to a machine, logical human thought is doomed.
The Pro-AI section is by PhD Student Nicholas Synovic and Computer Science Professor Konstantin Läufer. The Anti-AI section is by Interdisciplinary Honors Program Lecturer Ghazal P. Nadi.