AI may not simply be “a bubble,” or even an enormous bubble. It may be the ultimate bubble. What you might cook up in a lab if your aim was to engineer the Platonic ideal of a tech bubble. One bubble to burst them all. I’ll explain.
Since ChatGPT’s viral success in late 2022, which drove every company within spitting distance of Silicon Valley (and plenty beyond) to pivot to AI, the sense that a bubble is inflating has loomed large. There were headlines about it as early as May 2023. This fall, it became something like the prevailing wisdom. Financial analysts, independent research firms, tech skeptics, and even AI executives themselves agree: We’re dealing with some kind of AI bubble.
But as the bubble talk ratcheted up, I noticed few were analyzing precisely how AI is a bubble, what that really means, and what the implications are. After all, it’s not enough to say that speculation is rampant, which is clear enough, or even that there’s now 17 times as much investment in AI as there was in internet companies before the dotcom bust. Yes, we have unprecedented levels of market concentration; yes, on paper, Nvidia has been, at times, valued at almost as much as Canada’s entire economy. But it could, theoretically, still be the case that the world decides AI is worth all that investment.
What I wanted was a reliable, battle-tested means of evaluating and understanding the AI mania. This meant turning to the scholars who literally wrote the book on tech bubbles.
In 2019, economists Brent Goldfarb and David A. Kirsch of the University of Maryland published Bubbles and Crashes: The Boom and Bust of Technological Innovation. By examining some 58 historical examples, from electric lighting to aviation to the dotcom boom, Goldfarb and Kirsch develop a framework for determining whether a particular innovation led to a bubble. Plenty of technologies that went on to become major businesses, like lasers, freon, and FM radio, did not create bubbles. Others, like airplanes, transistors, and broadcast radio, very much did.
Where many economists view markets as the product of sound decisions made by purely rational actors—to the extent that some posit that bubbles don’t exist at all—Goldfarb and Kirsch contend that the story of what an innovation can do, how useful it will be, and how much money it stands to make creates the conditions for a market bubble. “Our work puts the role of narrative at center stage,” they write. “We cannot understand real economic outcomes without also understanding when the stories that influence decisions emerge.”
Goldfarb and Kirsch’s framework for evaluating tech bubbles considers four principal factors: the presence of uncertainty, pure plays, novice investors, and narratives around commercial innovations. The authors identify and evaluate the factors involved, and rank their historical examples on a scale of 0 to 8—8 being the most likely to predict a bubble.
As I began to apply the framework to generative AI, I reached out to Goldfarb and asked him to weigh in on where Silicon Valley’s latest craze stands in terms of its bubbledom, though I should note that these are my conclusions, not his, unless stated otherwise.
Uncertainty
In 1895, the city of Austin, Texas, purchased 165-foot-tall “moonlight towers” and installed them in public hot spots. The towers were equipped with arc lighting, which burned carbon rods. Spectators gathered to stare up in awe as ash rained down upon them.
With some technologies, Goldfarb says, the value is obvious from the start. Electric lighting “was so clearly useful, and you could immediately imagine, ‘Oh, I could have this in my house.’” Still, he and Kirsch write in the book, “as marvelous as electric light was, the American economy would spend the following five decades figuring out how to fully exploit electricity.”
“Most major technological innovations come into the world like electric arc lighting—wondrous, challenging, sometimes dangerous, always raw and imperfect,” Goldfarb and Kirsch write in Bubbles. “Inventors, entrepreneurs, investors, regulators, and customers struggle to figure out what the technology can do, how to organize its production and distribution and what people are willing to pay for it.”
Uncertainty, in other words, is the cornerstone of the tech bubble. Uncertainty over how the stories entrepreneurs tell about an innovation will translate into real business, which parts of a value chain it might replace, how many competitors will flock to the field, and how long it will take to come to fruition. And if uncertainty is the foundational element of a tech bubble, alarm bells are already ringing for AI. From the beginning, OpenAI’s Sam Altman has bet the house on building AGI, or artificial general intelligence—to the point where, when a crowd of industry observers once asked him about OpenAI’s business model, he told them with a straight face that his plan was to build a general intelligence system and simply ask it how to make money. (He has since moved away from that bit, saying AGI is not “a super useful term.”) Meta is aiming for “superintelligence,” whatever that means. The goal posts keep on moving.
In the nearly three years since AI took center stage in Silicon Valley, the major players, with the exception of Nvidia, whose chips would likely still be in use post-bust, still haven’t demonstrated what their long-term AI business models will be. OpenAI, Anthropic, and the AI-embracing tech giants are burning through billions, inference costs haven’t fallen (those companies still lose money on nearly every user query), and the long-term viability of their enterprise programs is a big question mark at best. Is the product that will justify hundreds of billions in investment a search engine replacement? A social media substitute? Workplace automation? How will AI companies price in the costs of energy and computing, which are still sky-high? If copyright lawsuits don’t break their way, will they have to license their training data, and will they pass on that additional cost to consumers? A recent MIT study made waves—and helped stoke this most recent round of bubble fears—with a finding that 95 percent of firms that adopted generative AI did not profit from the technology at all.
“Usually over time, uncertainty goes down,” Goldfarb says. People learn what’s working and what’s not. With AI, that hasn’t been the case. “What has happened in the last few months,” he says, “is that we've realized there is a jagged frontier, and some of the earliest claims about the effectiveness of AI have been mixed or not as great as initially claimed.” Goldfarb thinks the market is still underestimating the difficulty of integrating AI into organizations, and he’s not alone. “If we are underestimating this difficulty as a whole,” Goldfarb says, “then we will be more likely to have a bubble.”
AI’s closest historical analogue here may be not electric lighting but radio. When RCA started broadcasting in 1919, it was immediately clear that it had a powerful information technology on its hands. But less clear was how that would translate into business. “Would radio be a loss-leading marketing for department stores? A public service for broadcasting Sunday sermons? An ad-supported medium for entertainment?” the authors write. “All were possible. All were subjects of technological narratives.” As a result, radio turned into one of the biggest bubbles in history—peaking in 1929, before losing 97 percent of its value in the crash. This wasn’t an incidental sector; RCA was, along with Ford Motor Company, the most heavily traded stock on the market. It was, as The New Yorker recently wrote, “the Nvidia of its day.”
Pure Play
Why is Toyota valued at $273 billion while Tesla is worth $1.5 trillion to investors—when Toyota shipped more cars than Tesla last year, and brought in three times as much revenue? The answer is tied to Tesla’s status as a “pure-play” investment in electric (and to a lesser extent, autonomous) cars. In the 2010s, Elon Musk harnessed all the exciting uncertainty around EVs to tell a story about a future free of internal combustion engines that was so alluring that investors were willing to bet enormously on a volatile startup over proven workhorses. A pure-play company is one whose fate is bound to a particular innovation panning out, the kind of company about which entrepreneurs can tell the most exciting and fantastic stories. You need pure plays for a bubble to inflate; they’re the vehicle through which narratives turn into material bets.
So far this year, according to Silicon Valley Bank, 58 percent of all VC investment has gone to AI companies. There aren’t a ton of obvious pure-play investments available to retail investors—another criterion for pumping up a bubble—but there are some big ones. Nvidia is at the top of the list, having staked its future on building chips for AI firms and become the first $4 trillion company in history in the process. When a sector sees a lot of pure plays, according to Goldfarb and Kirsch’s framework, it’s more likely to overheat and inflate a bubble. SoftBank has plans to sink tens of billions of dollars into OpenAI, the purest AI play there is, though it’s not yet open to retail investors. (If and when it finally is, analysts speculate that OpenAI may become the first trillion-dollar IPO.) Investors have also backed pure-play companies such as Perplexity (now valued at $20 billion) and CoreWeave ($61 billion market cap). In the case of AI, these pure-play investments are especially worrying, because the biggest companies are increasingly bound up with one another. Nvidia just announced a proposed $100 billion investment in OpenAI, which in turn relies on Nvidia’s chips. OpenAI relies on Microsoft’s computing power, the result of a $10 billion partnership, and Microsoft, in turn, depends on OpenAI’s models.
“The big question is how much of that is in the private markets, and how much of that is in the public markets?” Goldfarb says. If most of the money is in private markets, then it’s mostly private investors who would lose their shirts in a crash. If it’s mostly in public markets, such as stocks and mutual funds, then the crash would bleed regular people’s pensions and 401(k)s. And guess what: It’s increasingly creeping into public markets. (Many market watchers have also been pointing to the rise of private credit as an increasing source of systemic risk, as more small investors have been able to dump their money into opaque deals over the past year.) Either way, the sums are huge. As of late summer 2025, Nvidia accounts for about 8 percent of the value of the entire stock market.
Novice Investors
Twenty-five years ago, on March 10, 2000, the stock market hit a milestone: The tech-heavy Nasdaq reached a then-record high of 5,132 points. At the time it appeared merely to be continuing its rapid ascent—it had risen an astonishing 86 percent in the previous year alone—buoyed by an investor gold rush for internet companies like eToys, CDNow, Amazon, and, yes, Pets.com.
Today, hordes of novice retail investors are pumping money into AI through E-Trade and their Robinhood apps. In 2024, Nvidia was the single most-bought equity by retail traders, who plowed nearly $30 billion into the chipmaker that year. And AI-interested retail investors are similarly flocking to other big tech stocks like Microsoft, Meta, and Google.
Most of the investment thus far has come from institutional investors, but along with Nvidia and the giants, more pure-play—and riskier—AI startups like CoreWeave are going public or preparing to go public. CoreWeave’s March IPO was initially seen as lackluster, but the stock has been on the rise since, giving retail investors another way to push money into AI.
As Goldfarb points out, everyone is something of a novice investor when it comes to AI: it’s such a new field and technology, there’s so much uncertainty, and no one knows how it’s going to play out. What makes today different from 100 years ago, Goldfarb and Kirsch note in the book, is that anyone can get in on the action. A hundred years ago, stocks were simply too expensive for most working people to buy, which sharply limited the capacity to inflate bubbles (though that didn’t stop the Depression from happening). Now stocks of every size and stripe can be bought with a tap in the Robinhood app, and the casino-ification of the economy and the breakdown of a meaningful regulatory apparatus to rein it all in have arrived just in time to give novice investors a vehicle for sinking their savings into the vague promise of superintelligence.
Coordination or Alignment of Beliefs Through Narratives
In 1927, Charles Lindbergh flew the first solo nonstop transatlantic flight from New York to Paris. The aviation industry had been underwritten by government subsidies for a quarter of a century by then, but the flight made news around the world. It was the biggest tech demo of the day, and it became an enormous, ChatGPT-launch-level coordinating event—a signal to investors to pour money into the industry.
“Expert investors appreciated correctly the importance of airplanes and air travel,” Goldfarb and Kirsch write, but “the narrative of inevitability largely drowned out their caution. Technological uncertainty was framed as opportunity, not risk. The market overestimated how quickly the industry would achieve technological viability and profitability.”
As a result, the bubble burst in 1929—from its peak in May, aviation stocks dropped 96 percent by May 1932.
When it comes to AI, this inevitability narrative is probably the easiest and clearest one to mark as a huge affirmative on the bubble matrix. There’s no bigger narrative than the one AI industry leaders have been pushing since before the boom: AGI will soon be able to do just about anything a human can do, and will usher in an age of superpowerful technology the likes of which we can only begin to imagine. Jobs will be automated, industries transformed, cancer cured, climate change solved; AI will do quite literally everything. Add in the industry narrative that we have to “beat” China to AGI, and thus must avoid regulating AI at all costs, and you have even more fuel on the fire.
“Is this a good story?” Goldfarb says. “The answer is profoundly yes.”
What aviation would be good at—moving people from one place to another, much more quickly than was possible with cars, trains, or horses—was clear enough early on. This is what elevates AI bubbledom to another level: The promise of AI, to investors, is nearly infinite. It’s beyond uncertain. It’s unknowable. And we should note that AI arrived after the better part of a decade of near-zero interest rate policy that led Silicon Valley investors to place bets on companies with little to speak of in the way of business models but plenty of narrative. Uber, the poster child startup of the era, founded in 2009, did not post a profitable year until 2023. And the AI narrative is ‘Uber for X’ on hallucinogenic steroids. Different parts of the AI story, whether it’s, say, ‘AI will cure cancer’ or ‘AI will automate all jobs’, appeal to investors and partners of every stripe, making it uniquely powerful in its bubble-inflating capacities. And so dangerous to the economy.
It’s worth reiterating that two of the closest analogs AI seems to have in tech bubble history are aviation and broadcast radio. Both were wrapped in high degrees of uncertainty and both were hyped with incredibly powerful coordinating narratives. Both were seized on by pure-play companies seeking to capitalize on the new game-changing tech, and both were accessible to the retail investors of the day. Both helped inflate a bubble so big that when it burst, in 1929, it left us with the Great Depression.
So yes, Goldfarb says, AI has all the hallmarks of a bubble. “There’s no question,” he says. “It hits all the right notes.” Uncertainty? Check. Pure plays? Check. Novice investors? Check. A great narrative? Check. On that 0-to-8 scale, Goldfarb says, it’s an 8. Buyer beware.
Update 10/27/25 3:45pm ET: Due to an editing error, an earlier version of this story was initially published.