Professor Joseph Vukov on ‘Staying Human in an Era of Artificial Intelligence’

This is the philosophy professor's third book. In it, he explores ideas surrounding the difference between the human mind and the capabilities of artificial intelligence.

Associate Professor of Philosophy Joseph Vukov recently finished his third book on the use of artificial intelligence. (Courtesy of Joseph Vukov)

This summer, associate philosophy professor Dr. Joseph Vukov published his third book, “Staying Human in an Era of Artificial Intelligence,” which outlines the “healthy balance” between human values and the possibilities of artificial intelligence.

Vukov is the associate director of the Hank Center for the Catholic Intellectual Heritage. In addition to writing his own book, Vukov said he will be working with a group at the Vatican to coauthor a book in the coming years. He also recently appeared on an episode of the AMDG (Ad Maiorem Dei Gloriam) podcast to discuss AI, and he teaches a course on AI for the Catholic organization Word on Fire.

Now in his ninth year teaching at Loyola, Vukov said the ever-changing AI field is raising new questions for him as a philosopher.

Vukov writes specifically about generative AI, including software like ChatGPT, which was released to the public in late 2022. Since its introduction, Vukov said, what constitutes appropriate AI usage has become a controversial topic.

For Vukov, AI raises the question of how humans can use it ethically while still retaining their identity.

“By staying aligned with humanity, AI challenges us in that way,” Vukov said. “It does a good job mimicking humans with artwork, poetry and music, things we thought previously that only humans could do. In the face of that, it’s important that we remember what makes humans human. What sets us apart from AI?”

Vukov takes philosophy and ethics further with a fusion class called “The Philosophy and Biology of Neuroethics.” In the class, he said students read sci-fi novels and then use philosophy, science and theology to consider their concepts.

“According to the Catholic view, and I believe other traditions can draw from it, human beings are body and soul,” Vukov said. “AI have material bodies, but they’re silicon. Humans are biological organisms. Ultimately, we’re immaterial.”

He co-teaches the class with biology professor Dr. Michael Burns, who covers the scientific side of the conceptual thinking.

“I do a lot of computational biology,” Burns said. “A lot of the data analysis involves coding, and it turns out that AI can figure out syntax for me, and help me get to my endpoint in a pipeline of code.”

Vukov was Dr. Gina Lebkuecher’s dissertation advisor while she worked on her Ph.D. in philosophy. She said she helps with the philosophy and ethics side of the class.

“I learned while working with Dr. Vukov that generative AI is probabilistic,” Lebkuecher said.  “It gives the appearance of generating information, but sometimes it’s a best guess.”

AI can offer advancements in many other areas as well, Burns said, like the medical field. For instance, some researchers are looking into the possibility of using AI to read and interpret X-rays, the Associated Press reported.

“Medical records need to be transcribed, and AI can do that,” Burns said. “The radiology department can have AI look at MRIs and X-rays to provide a suggested diagnosis to a clinician. AI is not likely to have a bad day, like a doctor may.”

A statement posted in August 2023 by the Office of the Provost said Loyola students are only allowed to use AI-assisted technology if specifically authorized by faculty.

“It breaks my heart,” said Dr. Amy Shuffelton, a professor in the philosophy department, about the academic repercussions students face after using AI.  “Students should be taking classes that are exciting to them, meaningful. I would hope that the assignments professors are giving are inspirational enough to be reason for students to do them. I’m not here to be a cop. I’m here to help you learn.”

Shuffelton, like Vukov, contends that AI can have benefits.

“I think there are also a lot of really neat uses of AI,” Vukov said.  “It can help you brainstorm or give you ideas for a paper. It can also be useful to see an idea for a paper that AI suggests and think to yourself ‘Oh, I see a way I can alter that and make it better!’”

Some students, however, don’t see a reason not to utilize AI for specific tasks.

“It frustrates me that so many teachers preach against AI,” Sydney Craig, a first-year advertising and public relations major, said. “These tools can be used correctly and effectively in our digitally progressive generation.”

In addition to the educational and medical fields, AI has several potential applications which could apply to the smaller, personal details of someone’s life. Social media apps use AI to predict what content their users might want to see, and internet browsers predict what a user might want based on only a few words typed into a search box.

“AI is a tool that can do lots of things for us that can take away menial tasks that, once upon a time we had to ask or pay other people to do,” Shuffelton said.  “AI is faster, cheaper, better.”
