Editor-in-Chief Griffin Krueger takes a long view to call for a short stance on the artificial intelligence industry.
Financial markets suffered a shock Jan. 27 as U.S. tech giants sustained major losses after the Chinese company DeepSeek released an artificial intelligence large language model with the capacity to compete with American alternatives at a fraction of the cost.
The loss in value was especially harsh for Nvidia, the company that produces the chips powering most AI systems, which experienced the largest single-day loss of market value in U.S. stock market history, shedding $600 billion in market cap, The Associated Press reported.
Despite Nvidia’s drop, it remains the third most valuable U.S. stock — currently worth $2.9 trillion. As a whole, the “Magnificent Seven” stocks — Nvidia, Meta, Apple, Google, Microsoft, Amazon and Tesla — have a combined market cap of $17.59 trillion. Analysts have lumped these companies together due to their consistent gains and the notion that they stand to gain the most from AI.
In recent years, the S&P 500 index has reached all-time highs, mostly powered by the Seven’s growth. Hype around AI development has fueled the boom.
The growth has been staggering — a month before the 2022 release of ChatGPT, the Seven had a combined value of just $7.86 trillion.
It’s not just the financial world where AI has gained outsized influence. AI has become a hot topic in schools, offices and healthcare settings. It’s become so pervasive in our broader dialogue you’ll find no fewer than three articles on AI in the opinion pages of this week’s Phoenix.
This raises some questions — what actually is AI? And is it really worth $10 trillion?
AI is an amorphous moniker that can be applied to many different things. I’ve been playing against AI opponents in NBA 2K my entire life, and Steven Spielberg’s movie “A.I. Artificial Intelligence” premiered back in 2001.
Today, when we discuss AI, what we mean is large language models like ChatGPT or image generators such as Google Gemini. While these technologies have impressive capabilities, they aren’t conscious or capable of actual intelligence.
Despite their ability to replicate realistic text, LLMs aren’t “true AI,” according to Big Think.
A truly intelligent system would more resemble what’s often referred to by researchers as artificial general intelligence, or AGI. Theoretically, AGI would be able to perform any task a human can — but anything like it is currently far from reality.
Although Google’s chief AGI scientist Shane Legg said there’s a 50% chance AGI will be developed by 2028, according to Time, researchers can’t agree on whether it’s actually possible.
In many respects, AI has become a buzzword, a marketing term even — something to associate a start-up with to ensure massive investment and public valuation. Many pre-existing companies have jumped on the trend too, including those familiar to students, like Grammarly.
The original Grammarly existed long before its AI integration, but the company’s profile has risen after hopping on the AI wave.
Seemingly, investors will buy anything if AI is in the name.
This may seem reminiscent of the late ’90s, when Wall Street was buying anything dot-com. But don’t worry, blindly buying into a hyped-up technology without broadly understanding it is so Y2K.
Just take it from the AI pioneers themselves.
Bret Taylor, who briefly served as chairman of OpenAI while Sam Altman was temporarily ousted as CEO, said in an interview on a venture capitalist podcast, “I think we are in a bubble.”
He went on to say the bubble is akin to the dot-com bubble of a previous age, according to Business Insider.
“A huge percentage of the gains in the stock market over the past 30 years have more or less been these digital companies created in the dot-com bubble,” Taylor said in the interview.
“I think the same thing is likely to happen in AI. We will look back and laugh at some of the excess, but I am confident we will have a brand-defining, likely trillion-dollar consumer company come out of this.”
There it is. Investors may be a bit overzealous right now, but in due time this excitement will bear fruit and we’ll enter a new world of AI-fueled prosperity.
There’s just one problem. No one is actually making any money.
Venture capital firm Sequoia estimated generative AI will need to see annual revenue of $600 billion — 100 times the current revenue for OpenAI’s ChatGPT, Google’s Gemini, Microsoft’s Copilot and similar services — to justify the investments companies are currently making.
OpenAI anticipated annual revenue of $3.7 billion in 2024 with $5 billion in expenses, according to The New York Times. Something doesn’t add up.
What’s more, OpenAI’s ChatGPT Pro, which costs users $200 a month, is not profitable due to high operating costs, Sam Altman wrote in a Jan. 5 blog post.
Meanwhile, pretty much every other AI tool remains free to use for anyone with a laptop and an internet connection. It’ll be hard to get users to start paying high fees for a product they’ve grown accustomed to using for free and — so far — the promised trillion-dollar product is yet to emerge.
Google and Meta have both added AI features to their products, but they’re also free to use. Massive investments are needed to not only develop these systems but to keep them running.
I may have a Gemini assistant for my Gmail account, but the service is still free. All the while, Google must pour billions into chips and data centers.
What definitely isn’t free is the electricity powering the massive data centers that run AI systems. A Jan. 28 RAND Corporation report estimated these centers will need an additional 10 gigawatts of power capacity by the end of 2025.
If current trends continue, data center electricity demand could reach 327 gigawatts by 2030, a 237% increase from 2022. To put that in perspective, one gigawatt can power 750,000 homes, according to CNET.
Setting aside the climate concerns that come with driving up electricity demand, the costs associated with AI are only going to climb.
There are myriad problems with the products currently available. Companies are rapidly running out of data to train their LLMs on, according to Forbes, raising concerns that developers won’t be able to fulfill their promises to scale these technologies into ever-better AGI systems.
The modern work environment is dependent on the newest information — and adaptability is everything. LLMs have cut-off dates for the data they’re trained on, which are several months behind the present. ChatGPT currently has a December 2023 cut-off date.
These limitations are clearest in questions about current events. A December audit by NewsGuard, an organization that rates the accuracy of news outlets, found the 10 leading LLM chatbots repeated false information 40% of the time. Counting responses that contained false claims or a non-response, the fail rate was 62%. When similar tests were applied to DeepSeek, NewsGuard found a fail rate of 83%, according to Reuters.
That may be all well and good, but people don’t use AI for news — they use it to generate papers for their classes. So how do LLMs stack up in this realm? Anecdotally, if you ask most professors, they’ll say AI writing often mixes up arguments, lacks sufficient sourcing and just plain makes things up.
Researchers have termed the phenomenon of LLMs giving false answers or making up information to fill data gaps “hallucinations,” according to Nature. It’s not just wild recommendations to add glue to your pizza; LLMs often make up broad swaths of information with no factual backing.
I don’t know about you, but that sounds like something I’d rush to pay $200 for.
I’m not saying this technology doesn’t have potential. I have no doubt innovation will continue, and I’m not blind to how these systems have already changed our world. But I don’t think any of this is worth $10 trillion.
With that in mind, I thought I’d put it to the test.
I asked ChatGPT to “write an opinion piece for a college newspaper on how AI stocks are overvalued.” Here are the first few paragraphs of what it offered back.
“In recent months, artificial intelligence has taken the financial world by storm. Tech stocks, especially those tied to AI development, have seen meteoric rises, with many of the biggest players in the market reaching valuations that seem to defy logic. AI’s potential is undeniably vast, with applications in everything from healthcare to entertainment, but the question remains: are these stocks actually worth their skyrocketing prices, or are we witnessing the latest financial bubble?
“At the heart of the AI stock surge is the overwhelming optimism surrounding the technology’s future. Investors have poured money into AI companies, hoping to capitalize on what could be a transformative technology. But stock prices aren’t always rooted in reality. The market’s current obsession with AI resembles the dot-com bubble of the late 1990s, where speculation drove up tech stock valuations far beyond their actual earnings potential.”
“AI is still in its early stages. While we’re seeing impressive advancements in areas like natural language processing, machine learning, and robotics, the practical application of these technologies is still being developed. Major breakthroughs are likely years away, and regulatory hurdles, ethical considerations, and technical limitations remain significant roadblocks. This doesn’t mean AI isn’t worth investing in — it absolutely is. But the current prices suggest that investors are expecting immediate, substantial returns, which is simply unrealistic.”
It’s fine, maybe even good. The writing is technically sound, but it lacks specificity and doesn’t cite anything to back its claims. It lacks any personal style or lived experience to pull from.
To solve this, I could meticulously write a prompt telling the LLM the sources I want it to use, the points I want it to hit and the conclusions I want it to draw.
But at that point I may as well just write the essay.
Griffin Krueger is the Editor-in-Chief of The Phoenix. He began working for The Phoenix during his first week at Loyola and has been writing about the university, the surrounding community and the city of Chicago ever since. Krueger previously worked as Deputy News Editor and Sports Editor and is a fourth-year studying political science with a minor in history. Originally from Billings, MT, he enjoys reading and exploring the city on his bike.