Loyola Panel Explores How AI is Reshaping Governance and Political Systems

Faculty warn AI may be “just another technology,” but one increasingly shaped by unchecked power, surveillance and global stakes.

The Artificial Intelligence and Governance panel was hosted at 7 p.m. March 24. (Niko Zvodinsky / The Phoenix)

The question “what are we governing,” posed by adjunct professor Griffin Thompson, became a theme of the AI Society and Honors Student Government’s panel — entitled Artificial Intelligence and Governance — held at 7 p.m. March 24.

The panel, led by Honors Student Government president Marco Alvarado, hosted three experts in government and political science and covered AI’s effect on government, democracy and international politics.

Brian Endless, Jennifer Forestal and Thompson are all Loyola faculty. Thompson is an adjunct professor who recently retired from the United States Department of State, where he was director of the Office of Renewable Energy and Energy Efficiency, according to Loyola’s website. Forestal is a political theorist who specializes in democratic theory, according to a flyer handed out at the event.

The question of what legislators, businesspeople and citizens are trying to govern and regulate had a simple answer — technology.

“AI is simply another manifestation of a long series of technologies that have all raised the question of, ‘How do we govern them?’” Thompson said during the panel. “Therefore we need to place AI and the governance issues in the context of, how have we historically responded to technology, as humanity, as societies, and especially as Western civilization.” 

Although AI is frequently demonized, according to the panelists, it’s still just another technology. Panelists repeatedly referred to AI as a toy — dangerous when wielded with the intention of maximizing profits.

“We in the west have a pathological passion for the gadget,” Thompson said.

The panelists frequently returned to the image of the AI gadget as a toy wielded in the hands of man-children, the powerful few tech CEOs who create the rules which govern the new technology. 

“It’s the boys in Silicon Valley defining [the rules] for us,” Endless, a political science lecturer who specializes in post-conflict governance, said. “Who’s giving them the keys? No one. They’re taking them.” 

This technology, Forestal suggested, sometimes appears to act in ways that would be illegal for people — for example, large language models do not pay for the content used to train them. 

“My book was stolen by Meta to train their AI system,” Forestal said during the panel. “I did not consent to that. If you want to buy my book and train your LLM, you can pay $35 and do that, right? The larger questions are about power.”

Along with the data-consuming processes required to train AI technology, the panelists acknowledged AI’s use as a surveillance tool, referencing President Donald Trump’s use of ICE monitoring.

“The surveillance system that these guys want to put in place is simply a digital expansion of the thug type of government that we have now,” Thompson said during the panel.

Endless also extended this discussion to surveillance in non-democratic states.

“This is [an] authoritarian’s happiest playground,” Endless said. “I have dealt with a number of authoritarian governments, Rwandan very firsthand, and people are just salivating at the possibilities of what this can do for surveillance, for control, to stop people from dissenting, to better point out the people who are likely [to].” 

Student reaction to the panel was mixed. Some students agreed with Forestal’s staunchly anti-AI stance, while others thought the panelists were overreacting in their fears of AI.

“I already feel anti-AI,” fourth-year political science student Meghan Brentana said. “I think it has helped me learn how to navigate conversations with people who feel differently about AI from me.”

Other students took a pro-AI stance, undeterred by the panelists’ warnings.

“My biggest concern [is] we see AI as an enemy,” fourth-year international business and information systems major Jaime Velazquez said. “[Wherever] you’re gonna work at, the AI is still there.”

Despite these differing views, the cliché holds true — AI is inevitable, according to Velazquez. Anyone in the United States with an internet connection can access ChatGPT or Claude.

At the end of the panel, though, Endless offered a hypothesis for future AI regulation.

“To me, the most likely way we’re going to end up governing AI is that China and the United States, possibly plus Russia, are going to start treating AI like nuclear weapons,” Endless said. 

While Endless acknowledged potential concerns with the claim, a mutual understanding of AI’s dangers, paired with efforts to restrain its unchecked spread, may be what finally keeps the advanced technology in balance.

Whatever form the effort to protect people from AI ultimately takes, there is hope for it.

“I’m leaning more hopeful, but [the panel] gave me further reassurance of how much power we have in this system,” Brentana said.
