Why I (Literally) Don't Buy The AI Hype
There's promise in the sector, but is it enough to justify the valuation?
Since ChatGPT’s product launch in late 2022, the S&P 500—almost 1/3 of which is made up of the Magnificent Seven tech stocks—has risen consistently, in fact by around 60% before the recent correction. While there were other causes for the gains (the end of the Fed’s hiking cycle, optimism about GLP-1 weight loss drugs, strong earnings, and a generally sound economy), a great deal of the bullishness was centered around optimism about AI. Since the beginning of 2023, Microsoft has nearly doubled, while Nvidia stock is up about 8x, making it 40% more valuable than Amazon and about 3/4 as valuable as Alphabet (the artist formerly known as Google).
But is this AI optimism really justified? Let’s take a look.
Isn’t 2.5 Years Enough Time?
The bull case for AI isn’t presented as a case but as indisputable fact: AI is going to take over the world, and there’s nothing anyone can do about it. Many technology executives, including Elon Musk, have worried publicly that an AGI (artificial general intelligence) could come to resemble Skynet from the film Terminator 2, imposing a hostile political system or causing a global genocide through flawed programming. In this telling, humans will soon have nothing to do, because AI will do everything better, and will subsist on basic income because no jobs will exist.
Less dramatically, the near-term business outcome is predicted as follows: AI will replace many workers en masse, driving up unemployment, and businesses will rely on AIs to handle numerous tasks, including coding, which was formerly considered safe from automation. OpenAI CEO Sam Altman predicted that customer service would be the first domino to fall, with GPTs replacing the bulk of email and chat agents.
First of all, let me say that the promise of AI is undeniable. As a data analyst and data scientist, I work with machine learning tools on a near-daily basis (albeit more on the numerical side than the language side) and never fail to be impressed by their constantly improving capability, accuracy, and speed. There is no doubt that the language side of AI will evolve rapidly as well; GPTs have already improved significantly since ChatGPT’s launch. A world in which many or even most jobs were held by independently trained AI systems would not surprise me.
However, as investors we must concern ourselves not just with the overall promise of a technology, but with its promise relative to its valuation. Right now, AI’s valuation almost requires it to take over the world.
And that leads me to a question that’s become increasingly front of mind lately: isn’t 2.5 years enough time? Yes, big businesses move slowly, especially publicly traded ones, and firing large swathes of your workforce to replace them with a newly developed technology is a controversial step that demands careful consideration. But ultimately Corporate America is driven by profit above all else, and executives know how much money they could save by making this kind of switch. Yet they haven’t. So they must know something the market doesn’t. Is AI too inaccurate, too unreliable, too willing to spin fictions from thin air? Is it actually more expensive than humans? Is it too generalized for the sector-specific knowledge businesses need most? Only the executives can answer these questions, but after ChatGPT’s incredible product launch I expected mass layoffs to begin within six months; two and a half years later, they still haven’t come.
Maybe we humans aren’t so disposable after all.
A Wall Street Legend’s Take
Goldman Sachs head of global equity research Jim Covello may not be a household name, but he has been a successful tech-stock investor for longer than many Nvidia fanboys have been alive. Institutional Investor covered Covello’s take on AI, and it has stuck with me since I read it almost a year ago. Setting aside the complex questions about the technology’s promise and potential drawbacks, Covello asks two simple questions about AI that, in his view, don’t have satisfactory answers:
What trillion-dollar problem will AI solve? Compare technologies like the Internet, which replaced inefficient brick-and-mortar stores with a leaner, more efficient supply chain. Or the automobile, which let people and goods travel far greater distances in less time, shifting the world from local to globalized. The cotton gin and the mechanical reaper greatly reduced the labor needed on farms, driving the shift from agriculture to industry. For AI to be as big as these technologies, it needs to solve a problem of equal size, and it’s hard to think of what that could be. Compared to these advances, replacing a few customer service agents here and there doesn’t sound so exciting.
How will AI’s users get around its high costs? Again Covello, who was investing when the Internet was developed and rose to prominence, compares AI to the Internet. The Internet’s cost advantage is easy to see: running, say, a clothing store on the Web is far less expensive than running a brick-and-mortar one, which means lower prices for consumers and higher profits for the business. The Internet has made pretty much everything cheaper. AI, by contrast, is tremendously expensive: it is a hog of power, water, and computing resources, and hosting a free version of ChatGPT was so costly for OpenAI that the company had to turn to Microsoft for an investment within weeks. Says Covello, “We’ve found that AI can update historical data in our company models more quickly than doing so manually, but at six times the cost.”
The entire article is well worth a read.
The Technology’s Issues
Amid the chorus of voices discussing the existential threats AI poses to humanity (and I’m not downplaying those; I think they’re quite serious), it’s been easy to miss other prominent people calling attention to the serious drawbacks of current products. Although chatbots excel at certain tasks (as someone who writes code for a living, I find their programming ability pretty scary), they do worse at others. They have a tendency to fabricate when they don’t know the answer. And many of their answers are just well-written regurgitations of training data: there’s little evidence that they can perform well at tasks that require creating new and original content.
But I’m no expert, so don’t listen to me. Listen to five Apple researchers who published a paper in late 2024 that called attention to some serious issues with the most advanced GPT technologies available at the time. Here are just a few of the issues they found:
Mathematical and logical reasoning in the models is poor, partly because they attempt to recreate approaches seen in the training data rather than applying the approach best suited to each specific problem.
Adding additional complexity to questions (see example below) causes performance to drop off even further.
Even without additional complexity, the models’ performance on basic questions tops out at around 90%.
Adding small bits of “noise” (irrelevant information—for instance, in a question about purchasing bread, adding our plans to donate some of the bread) to prompts can also cause steep drop-offs in performance.
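The “noise” perturbation described above can be sketched in a few lines of Python. The question text, the distractor clause, and the add_noise helper below are my own illustrations in the spirit of the paper, not examples taken from it:

```python
# Illustrative sketch of a "noise" perturbation: an irrelevant clause is
# inserted into a word problem. The clause does not change the arithmetic,
# but models reportedly try to use it anyway.

BASE_QUESTION = (
    "Liam buys 3 loaves of bread at $4 each and 2 jars of jam at $5 each. "
    "How much does he spend in total?"
)

NOISE_CLAUSE = "He plans to donate one of the loaves to a neighbor."

def add_noise(question: str, clause: str) -> str:
    """Insert an irrelevant clause just before the final question sentence."""
    head, _, tail = question.rpartition(". ")
    return f"{head}. {clause} {tail}"

noisy = add_noise(BASE_QUESTION, NOISE_CLAUSE)

# The correct answer is unchanged by the extra clause: 3*4 + 2*5 = 22.
# The reported failure mode is that a model subtracts the donated loaf,
# computing (3-1)*4 + 2*5 = 18 instead.
correct_answer = 3 * 4 + 2 * 5
```

The point of the construction is that nothing in the inserted clause alters the math; a human ignores it effortlessly, while the models often do not.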
Lest you think Apple’s researchers somehow tricked the models by giving them overly complex questions, here’s one of the questions with maximum complexity added:
This is a question that can be solved easily by a human in under 30 seconds without a calculator. And yet many of the world’s leading AI models got this category of question wrong more than half the time. This has obvious implications for the real world, where problems are not cut and dried: they can be complex, specific, and individual. If AI is put in a situation where it can’t rehash and instead has to create an original solution, it may perform poorly. This is bad news because the difficult problems that have to be solved by salespeople, customer service agents, and others almost always require creativity.
Conclusion
While no expert on GPTs, I have endeavored to show that their path to world domination is far less certain than their cheerleaders would have us believe. It is not a stretch to say that at these valuations, something close to a 100% chance of that world domination is already priced in. That isn’t a good investment opportunity: it’s better to invest in battered, underrated stocks for value, and in things people don’t see coming for growth.
AI may still be a big part of our lives in twenty years, ten years, or even five, but it may take longer than people think, and not perform as well as expected on some tasks. A major issue is cost, as Covello says: GPTs are so demanding on resources that the large tech companies are actually planning to create new nuclear reactors to power them. Most new technologies that take off have a cost advantage, so that’s already a handicap for GPTs. Additionally, the technology will likely face major resistance in hogging this much energy for what amounts to little societal benefit.
It’s important to note that I’m not taking anything away from the technology here. I use ChatGPT on a near-daily basis. I’ve written here on Substack about the existential threat AI poses to Google, and worry just as much as my fellow technologists about future AI’s potential impact on employment and the political system. But as an investor, I think that as with any bubble, the promise of the technology has been trumpeted in the media far more than its many downsides and challenges. Hopefully this article goes a small way toward changing that.