Essay: Just Saying ‘AI’ is Too Vague

Writer Kailani Moore explores where normal computing ends and AI begins.

Let’s face it: “AI” is a huge buzzword right now. The technology has seeped into almost every aspect of modern social interaction through our phones and laptops, and it has especially infiltrated college campuses.

Even professors have come to accept that there’s no escaping AI in their day-to-day work, and many have begun reasoning with students about when it should be acceptable to use in academic settings.

Reading through your course syllabi, you might’ve seen a clause somewhere noting that the use of AI to complete your work is “strictly prohibited,” in and out of the classroom. The restriction makes sense, since the technology can generate ideas and thoughts that aren’t really your own. But there’s one problem with the phrase “artificial intelligence”: what does it really mean? Better yet, where do we draw the line?

Artificial intelligence is an incredibly broad concept, yet it’s something we depend on daily. Phones, laptops, tablets and smartwatches are all examples of highly capable — and yes, artificially intelligent — gadgets that can perform human-like tasks. So, when some professors ban “AI” from the classroom, what exactly are they referring to? 

IBM, a leading global tech company, defines AI as “a field, which combines computer science and robust datasets, to enable problem-solving.” The U.S. Department of State, meanwhile, describes it as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”

The dilemma is clear.

If I were to walk through Loyola’s campus and ask students what artificial intelligence is, I guarantee no two students would offer the exact same definition. And some professors, who may be of a different generation, might suggest even more dramatic descriptions of the idea.

Considering the myriad definitions of AI floating around campus and society, it seems careless to ban its use before accurately defining it.

Lots of modern inventions fall under the umbrella of AI simply because they can perform human-level tasks or produce an encyclopedia’s worth of information at the snap of a finger.

Grammarly, one of the most popular writing tools, is a form of artificial intelligence. Does that mean it should be banned from the classroom because it falls into the “AI” category? Some students and professors might say no, because it’s a harmless, helpful writing assistant; others might group it with tools that invite academic dishonesty. But if I replaced Grammarly with ChatGPT, I would likely get a different reaction.

The discussion becomes much more nuanced.

We forget that we’ve been living with artificial intelligence for much longer than we realize. The more socially accepted forms of AI that we don’t even think about, like Apple’s Siri and Amazon’s Alexa, have dominated our technological sphere for years. Both are certainly capable of producing information helpful for a class, though I’d argue that asking Siri a simple question isn’t as serious an offense as asking ChatGPT to write your history essay for you.

Clearly, AI exists in several forms, some more provocative than others. For this reason, we as learners at an institution of higher education have to ask where the boundary should lie and search for a common definition of artificial intelligence. A consensus would not only make communication easier, but also allow for a deeper understanding of the perpetually evolving, tech-driven world we’re now forced to navigate.

Many people see AI as a game-changing advancement for society, while others fear it becoming too powerful over human activity and view it as a threat. Either way, its rapid rise to fame has sparked meaningful debates about its viability and even its morality.

Technology imitating human behavior is something else we must learn to live with — and it seems like we’re doing a pretty good job so far. 

Just as we regarded the iPhone as revolutionary in 2007 and then adapted to it completely, we’ll come to feel the same way about more extreme forms of AI in the coming years. Until then, let this article be an invitation to ask yourself where you draw the line.

Feature Image by Holden Green / The Phoenix
