Now Top Investors Are Doubting the AI Revolution
Investors like Howard Marks and the Big Short’s Steve Eisman are among those questioning the AI boom.
By Michelle Celarier February 6, 2026

Jeff Currie has a message for investors in the so-called AI revolution. “Looking back at major technological booms, from railroads to the internet to shale, you see similar patterns emerge over time,” he explains. 

Even though the technologies can be transformative, investors can still lose a lot of money. Currie puts it more mildly: “Equity investors have not always seen returns that match early expectations.”

Generative AI is widely touted as a technology that will transform our world for good (or perhaps for ill). There is little question that it has captured the zeitgeist: Last year the top AI-related stocks commanded 40 percent of the market capitalization of the S&P 500, fueling an estimated 75 percent of the market’s gains. Not surprisingly, AI investment also accounted for almost half of U.S. GDP growth, according to the latest official figures.

For example, Nvidia, whose AI chip business started the boom in late 2023, is now worth almost $5 trillion, its stock having risen almost 300 percent since it took off just over two years ago. OpenAI, the privately held company that created ChatGPT, is valued at $500 billion. But as the sums spent to build out this brave new world’s data centers grow into the trillions, the warnings are also gaining currency. 

“There’s a level of skepticism that’s starting to go higher and higher and be louder and louder,” says Herb Greenberg, whose Substack “On the Street” is known for its “red flag alerts” that spot troubled companies. What’s interesting, he says, are the types of people who are speaking up. 

Some of the most surprising, he suggests, come from the private equity world, given that the venture capital world is now dominated by AI startups. Such critics include Carlyle’s Currie.

Currie’s case against AI is based in part on an analysis showing parallels between the current AI mania and the shale oil boom of the early 2010s. When a supply glut led to a collapse in oil prices, the shale boom ended in a massive rout, wiping out $2.6 trillion in equity.

Shale oil and generative AI, while seemingly opposite ends of the investment spectrum, actually “rhyme,” he contends.

“The technology ecosystem ultimately rests on substantial physical infrastructure and commodity inputs,” he tells Institutional Investor. “Data centers require significant capital investment, similar to energy infrastructure. Historically, businesses linked to those assets have tended to trade at lower valuation multiples than pure technology companies.”

The shale oil boom was “the most notorious growth-at-all-costs capex cycle in the modern era, where energy industry-wide capex reached 110 to 120 percent of cash flow at its peak,” says Currie. On the eve of the 2014 oil price crash, some energy companies were spending north of 200 percent of cash flow. Now, he says, some AI-related companies are spending close to 100 percent of theirs. Even cash-rich so-called hyperscalers like Meta and Google have been spending 60 percent of their cash flow to build out data centers, and recent earnings reports show that percentage is rapidly increasing.

The sunk costs and long lead times to build out shale oil infrastructure meant it was obsolete by the time it came onstream, leaving the builders saddled with debt, leases, and contracts, Currie explained in a recent Carlyle commentary.

AI is facing similar issues. “Are these chips going to be viable five years from now, 10 years from now?” Currie asks. Prices for the core product are already plunging. In 2024, the industry targeted a floor price of $4 an hour to rent the chips required to run sophisticated AI models, but today the price is already down to around $2, he explains. “The phrase ‘price war’ is already being used.”

“I don’t know anything about [the tech] industry, but I do know there’s a commodity attached to all those data centers, and it’s treated just like oil and it’s produced with lots of physical capital, just like oil,” Currie says.

Then there’s the financial engineering. Currie argues that “Big Tech AI appears to be using the exact same playbook that the energy industry used” with regard to off-balance-sheet financing, which allows players to outsource much of their capital expenditures. For example, OpenAI has a relatively debt-light balance sheet because the majority of the debt used to finance its infrastructure needs is held by its partners and those lending to them. Its partners, including Oracle, have taken on about $100 billion in debt through these off-balance-sheet arrangements.

“That’s equivalent to the net debt held by the six largest corporate borrowers in the world combined,” says Richard Norman, a founding partner at S3 Partners. “Let that sink in.”

These deals have raised investors’ concerns about systemic risk because while OpenAI’s balance sheet looks clean, the banks are ultimately on the hook should AI’s future turn out to be less rosy than anticipated. 

Another PE pro, Apollo Chief Economist Torsten Slok, recently published a note citing Census Bureau figures showing that AI adoption rates started to flatten out across all firm sizes in May of 2025. Among big companies, the adoption rates are actually going down. “Big companies are what matter, because that is where the dollars are,” says Greenberg.

Industry legends like Howard Marks, the founder of Oaktree Capital Management, which runs private equity and credit funds, also have words of caution for those caught up in the mania. “Memories are short, and prudence and natural risk aversion are no match for the drama of getting rich on the back of a revolutionary technology that ‘everyone knows’ will change the world,” he wrote in a 17-page treatise published in December titled “Is It a Bubble?”

Marks, who presciently called both the dot-com and subprime mortgage bubbles, focused on so-called circular deals, which have become common in the AI world and were also a problem in the telecom boom of the late 1990s. 

“Nowadays deals are being announced in which money appears to be round-tripped between AI players. People who believe there’s an AI bubble find it easy to view these transactions with suspicion. Is the purpose to achieve legitimate business goals or to exaggerate progress?” he asked. He noted that both OpenAI and Nvidia are engaged in this practice, with Goldman Sachs estimating that Nvidia will make 15 percent of its sales next year from such circular deals. (The partners in these deals are also where the off-balance-sheet financing occurs.)

While circular deals are commanding a lot of investor attention now, short seller Nate Koppikar, the founder of Orso Partners, first noticed the practice more than two years ago in what he described to II as the “grift shift.”

Koppikar was one of the first investors to predict the cascading downturn in tech and growth stocks in 2022, based in part on the interdependency of the companies. By the summer of 2023, he had discovered something similar: Highly touted AI startups were backed by companies like Nvidia, which needed them to buy its chips. “The big tech companies are driving this. They fund the start-ups and make them buy products from them. It’s a recycling of cash,” he told II at the time. Koppikar singled out CoreWeave, which went public in 2025 and has since become the poster child for circular deals due to its ties to Nvidia.

One short seller who requested anonymity says he is shorting a few AI companies and thinks their voracious demand for electricity may prove to be the catalyst for the stocks to fall. “Electricity prices have gotten high enough that people are pushing back,” he says. People in states like Texas are used to low electricity prices, but the demands AI data centers are placing on their power grids are causing “cost inflation” in electricity.

“Historically, high electricity prices have burst tech bubbles,” he says.

Short sellers are often ahead of the curve. But for the most part they have shied away from shorting companies like Nvidia for fear that the euphoria would wipe them out. 

Take Steve Eisman, an investor featured in “The Big Short” (and played by Steve Carell in the movie), who was shorting mortgage-backed securities as a portfolio manager at FrontPoint Partners in the lead-up to 2008. Eisman recently told CNBC that he currently owns big AI stocks like Nvidia, Microsoft, and Meta, but he’s getting nervous about doing so.

Eisman said he has been paying attention to the writings of Gary Marcus, who is an early critic of the large language models that dominate the AI conversation. 

Marcus argues that “the large language models, as they keep scaling, which is the model that everybody has, will start to lose their efficaciousness,” Eisman said on CNBC. “The improvement is going to slow as opposed to increase ... At some point, companies like Microsoft — if this becomes true — they’re going to start buying fewer chips.”

Eisman said he began reading Marcus nine months ago when “he was a lone wolf against Sam Altman [the founder of OpenAI] and everyone else.” But when the latest version of ChatGPT showed less improvement over the prior version than was the case with previous versions, people took notice. “All of a sudden Gary Marcus is not the lone wolf anymore.”

Marcus, a scientist and former NYU professor who has a Substack called “Marcus on AI,” is gloating. Noting that ChatGPT is “one of the fastest-growing consumer products in history, and gotten more press than God,” he wrote recently that “a fair case can be made that it is not what it has often been cracked up to be, and probably never will be.”

He cited several studies, including one by McKinsey, showing that adoption of generative AI is less pervasive than expected, and that its results are also underwhelming. 

Marcus said that his criticism of AI years ago nearly cost him his career. “My target was the then-extremely-popular notion that we could achieve general intelligence simply by ‘scaling’ large language models, sometimes referred to by the slogan ‘scale is all you need,’” which was proposed by people like OpenAI’s Altman. 

The idea was that one could use “massive amounts of data – often derived from human behavior – as a substitute for intelligence,” Marcus said.

He argued it wouldn’t work and that “these systems would never be reliable enough, and that even with more data they would have trouble with hallucinations, factuality, reasoning, outliers, and generalization.” 

“The problem with generative AI has always been that large language models associate patterns together without really understanding those patterns; it’s statistics without comprehension,” he said. 

According to Eisman, Marcus’s theory runs counter to the Wall Street foundational view of the AI trade — “that rising model complexity will justify the enormous spending on computing power that’s driving chip demand.”

He offered yet another analogy: “It’s like the foundational argument before the great financial crisis, where I eventually figured out that the entire mortgage fixed income market rested on one assumption, which was housing prices can’t go down,” he said. “And once that assumption got pulled out, the whole edifice collapsed.”

Marcus has warned of a similar scenario happening with AI: “The economy itself has become so wrapped up in generative AI and its promises, that the economy itself is, by many accounts, in serious jeopardy.” He cited a post on X by David Sacks, the Trump administration’s AI guru and a prominent Silicon Valley investor, who warned that “a reversal would risk recession. We can’t go backward.”

“If the economy goes down, ChatGPT will be at the center of the mess,” Marcus wrote in November, adding that “we could easily wind up with too many data centers, and a lot of chips that rapidly lose their value.” He said that the most likely scenario is that government bailouts will be required because AI will be deemed “too big to fail.”

While not going that far, even some in the tech industry are starting to agree that chip prices are likely to fall. For example, Salesforce CEO Marc Benioff wrote recently on X that “LLMs are the new disc drives, commodity infrastructure, you hot swap for whoever’s cheapest, plus best. The fantasy that the model is a moat just expired.”

Michael Burry, another hedge fund manager made famous by “The Big Short,” has also jumped into the debate. He shut down his hedge fund so he could short Nvidia without worrying about his investors and revealed his thesis on his new Substack, Cassandra Unchained, in late November.   

Burry doesn’t take a position on whether generative AI will change the world. His arguments are largely technical ones that revolve around the depreciation policies of tech giants like Microsoft, Google, Amazon, and Meta, which are Nvidia’s customers. He argues that their data center investments are not being written down fast enough because Nvidia’s one-to-two-year product cycles are making older chips obsolete.

Burry contends that circular deals may be inflating demand. And like Marcus, he believes that the resultant massive overbuilding of data centers could lead to a crash. (OpenAI, the maker of ChatGPT, which faces competition from Google’s Gemini and Anthropic, has committed hundreds of billions of dollars to data centers.)

Nvidia has pushed back on Burry’s thesis, saying that its products have a long shelf life, among other points. It also recently got permission from the U.S. government to sell chips to China, a potential boost to the company, although China is widely viewed as a potential rival to the U.S. industry. Still, Nvidia has hit some headwinds: The stock has fallen nearly 10 percent since peaking in early October. It fell again on Monday following reports that the company’s $100 billion investment in OpenAI is on hold, due to Nvidia’s concerns about OpenAI’s business strategy. (On Friday, the stock rose 7 percent after CEO Jensen Huang said the enormous spending by Nvidia customers, including Microsoft, Amazon, and Meta, was “justified” and “sustainable.”)

That Burry has put his money where his mouth is means something to Greenberg. “His willingness to tilt against the herd… with his own money based on what he believes is admirable,” he says. “I think you have to pay attention, because the one thing we know here is that nobody really knows where this is all headed.”
