AI and the Death of Quality Research

Staff writer Sadie Harlan ruminates on the harm in replacing traditional search engines with AI.

ChatGPT is a generative AI chatbot developed by OpenAI. (Kayla Tanada | The Phoenix)

It’s 2012, and students stand single-file in front of their local elementary school’s computer lab. For some, this so-called class is an hour of complete freedom to play whatever vaguely educational game they please, whether it be Friv, Starfall or CoolMath Games.

For others, it was a gateway to a wealth of knowledge packed into a pixelated screen. With platforms like Bing, Google and DuckDuckGo, kids could scour the depths of the internet for shiny new bits of information. Looking things up seemed so simple back then, with fewer flashy ads, more reliable information and less crowding from websites like Reddit and Quora.

Google was founded in the late 1990s with one clear goal: to make research easier. By consolidating thousands of links across the World Wide Web, it single-handedly created an accessible platform connecting information and users across different corners of the internet.

Now, nearly half of consumers turn to generative AI platforms like ChatGPT and Claude over traditional search engines, even for easily searchable questions. Sure, this change can be chalked up to computer users adapting to the ever-changing landscape of technology, but that doesn’t make the switch any less concerning.

Though it’s had its ups and downs, Google wasn’t always overrun with unreliable information. Plenty of adults can still remember typing specific words in quotation marks to get the desired result. Today’s younger generations can remember when an AI assistant didn’t appear on every search made.

Zooming out, the switch to AI can almost make sense. ChatGPT makes it easier to find information without scouring the fourth page of Google results in search of the right source. Claude can manage schedules and meetings based on a few pieces of information. Even Amazon Echo has its own AI, willing to help plan things like vacations and workdays.

However, in 2024, data centers accounted for nearly 1.5% of the world’s electricity consumption, and that number is rising each year. This level of constant energy usage isn’t sustainable for the Earth’s already fragile environment. The picture is clear: Carbon emissions and factories have weakened our world enough, and AI is poised to strike the killing blow.

The shift toward AI is taking away from small businesses and content creators as well. As consumers turn to ChatGPT to learn how to fix problems like failing car engines or broken household appliances, workers whose jobs rely on repairs, contracting or helping others are being pushed out of work.

These changes aren’t safe either. AI often gets information wrong — at best leading to a minor mistake, at worst causing real damage for a user acting on faulty advice.

Additionally, with Google and other search engines, the user determines which information to trust — what’s reliable and what isn’t. It’s a different story with generative AI. Because platforms like ChatGPT train their models on Reddit and other unreliable sources, the information they spew back to the user is riddled with inaccuracies and falsehoods.

Using ChatGPT to do research — especially research that requires depth and accuracy — is unsuitable. The information it provides can be drawn from any corner of the internet, not just trustworthy websites.

Amid this new age of misinformation, AI unabashedly stands out among the top purveyors of false knowledge and sloppy research. Its harm seems immeasurable. It consistently leaves internet users unable to believe their own eyes. It dominates schoolwork and inserts itself into almost every nook and cranny of the internet.

Today, it feels like AI is everywhere and there is no way to stop it. Especially now, when internet users can’t even look things up without turning to ChatGPT, the damage sticks out like a sore thumb. Between the environmental harm, the unreliability and the lack of human touch, AI’s new role as a search engine is ill-fitting.

Unlike in 2012, internet users these days no longer turn to Google for their knowledge needs, and computers have become a fixture in student life across the United States. With AI usage on the rise, the next generation will face the same technological challenges we do today, only magnified by a reliance on generative AI for research.
