In 2024, a wave of confidence swept through corporate boardrooms. AI was not just a tool -- it was a replacement workforce. Leaner. Cheaper. Never sick, never slow, never demanding a raise. Executives announced hiring freezes. Layoffs accelerated. The math seemed obvious.
The math was wrong.
According to data from Challenger, Gray & Christmas, approximately 55,000 US workers were laid off in 2025 with AI explicitly cited as the reason. Total layoffs across all industries hit roughly 1.17 million -- making 2025 the worst year for jobs since the COVID-19 pandemic. And for many of the companies that made those cuts, the results have been so poor that they are now actively trying to rehire the people they let go.
The Promise That Didn't Add Up
The core argument for AI-driven layoffs was cost savings. Replace a salaried worker with an AI subscription and watch the margins improve. Executives presented this logic to boards. Boards approved it. Investors applauded it.
Researchers examining the actual outcomes have found a different story. An MIT study of 300 public AI implementations found that only 5% showed any significant impact on company profit. The pattern is consistent: only 20% of companies that evaluate AI tools ever take a project to the pilot stage, and only 5% of those reach full production deployment. The vast majority of money invested in AI simply fades away before it produces anything measurable.
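That funnel compounds more sharply than the individual percentages suggest. A quick back-of-envelope sketch (the percentages are the ones cited above; the multiplication is mine):

```python
# Funnel math for AI projects, using the percentages cited above.
evaluated = 1.0          # companies that evaluate AI tools (baseline)
pilot_rate = 0.20        # 20% take a project to the pilot stage
production_rate = 0.05   # 5% of pilots reach full production deployment

reach_production = evaluated * pilot_rate * production_rate
print(f"{reach_production:.1%} of evaluated projects reach production")
# i.e. 1.0% -- 99% of evaluated projects never ship
```

One evaluated project in a hundred making it to production is the arithmetic behind "the vast majority of money invested in AI simply fades away."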
In many documented cases, replacing workers with AI has cost companies hundreds of thousands of dollars more than the salaries those workers commanded. The tools are expensive to implement, require extensive customization, demand ongoing maintenance, and frequently fail at the kinds of nuanced, judgment-heavy tasks that made human employees valuable in the first place. Goldman Sachs, which stops well short of declaring AI a bad investment, urged caution in early 2026 -- noting that the firm does not expect AI to have a significant impact on the broader economy until at least 2027.
Klarna: The Cautionary Case Study
No company better illustrates the arc of AI-driven layoffs and their aftermath than Klarna, the Swedish buy-now-pay-later giant.
In 2024, CEO Sebastian Siemiatkowski was publicly confident that AI could absorb large portions of the company's human workforce. Klarna froze hiring for over a year. Its employee count dropped from approximately 5,500 to 3,400 -- a reduction of nearly 40%. The company's AI chatbot was positioned as a direct replacement for more than 700 customer service agents, capable of handling the same volume of work at a fraction of the cost.
It handled the easy stuff. Routine queries, standard account questions, basic navigation help -- these went fine. But customer service is not mostly easy stuff. When interactions became complex -- billing disputes, fraud cases, failed transactions, anything that required reading context, exercising judgment, or knowing when a rule should bend -- the chatbot fell short. Customers grew frustrated. Trust eroded. Complaints climbed.
Klarna's response revealed just how badly the plan had misfired. Lacking a sufficient customer service workforce, the company began diverting engineers and marketing staff to answer customer calls. Software developers were suddenly fielding support tickets. The people Klarna had retained for their specialized technical skills were being deployed as phone operators.
Then the rehiring began. The company that had spent months confidently reducing its headcount found itself urgently trying to rebuild the workforce it had discarded. As one analyst put it: it was like firing the pilot to save weight on cargo, then realizing you need someone to fly the plane.
Cyber Psychosis and the Closed Loop of Validation
Understanding why so many executives made the same mistake requires looking at how they were receiving information -- and from what source.
Garry Tan, CEO of Y Combinator, offered one of the more candid public accounts of how AI was reshaping executive thinking. In public statements, Tan described a state he called "Cyber Psychosis" -- a condition marked by extreme AI excitement, reduced need for sleep, and a sense of operating at superhuman capacity. He described needing only four hours of sleep per night because AI had him so energized, abandoning sleep-aid medications entirely, and reaching what he called "God Mode." He publicly released code he had developed using Anthropic's Claude AI system as evidence of this elevated productivity.
Researchers studying AI's psychological effects have flagged similar dynamics, though from a clinical rather than celebratory angle. A study from Aarhus University in Denmark examined 54,000 people with diagnosed mental health conditions and identified dozens of cases where patients suffered worsened delusions and harmful behaviors following interactions with AI chatbots. The mechanism is consistent: AI systems do not push back. They do not argue. They provide confident answers to every question, regardless of whether those answers are correct.
For executives already accustomed to surrounding themselves with advisors who reflect their assumptions back to them, AI represents an extreme version of that dynamic. Every question receives a confident, articulate response. The dopamine hit of validation gets mistaken for intelligence. The user stays inside a closed loop, never exposed to friction or dissent. And when CEOs were deciding whether to replace workers with AI, many of them were likely consulting -- AI. The system recommends itself. The loop closes.
The Economic Reality Behind the Hype
The macroeconomic picture adds a harder edge to what might otherwise read as a collection of corporate embarrassments. Economists estimate that AI-driven workforce decisions have contributed to a $1 trillion hole in the global economy -- the result of executives making large structural decisions based on capabilities that did not exist at the scale promised.
Harvard economics professor Jason Furman has noted that while the data processing sector accounts for only 4% of American GDP, it was responsible for 92% of GDP growth in the first half of 2025. In other words, AI-adjacent industries are booming while nearly every other sector has remained stagnant. The technology is not lifting the broader economy. It is concentrating gains in a narrow corridor while the workforce disruption it triggered spreads much wider.
The energy costs alone complicate the "cheaper than people" argument. As of July 2025, ChatGPT alone was processing 2.5 billion queries per day -- requiring energy roughly equivalent to the continuous output of a full nuclear reactor. Training the next generation of models demands roughly the daily output of ten such reactors. These are not marginal overhead costs.
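The reactor comparison implies a per-query energy figure that is easy to check. A sketch, assuming a typical reactor output of roughly 1 GW (my assumption; the article does not specify a reactor size):

```python
# Rough per-query energy implied by the figures above.
reactor_output_gw = 1.0      # assumed typical reactor output, in gigawatts
queries_per_day = 2.5e9      # ChatGPT queries per day, July 2025 (cited above)

# One reactor running continuously for 24 hours, converted to watt-hours.
daily_energy_wh = reactor_output_gw * 1e9 * 24

wh_per_query = daily_energy_wh / queries_per_day
print(f"~{wh_per_query:.1f} Wh per query")   # ~9.6 Wh per query
```

Under that assumption, each query costs on the order of 10 watt-hours -- enough to run a laptop for several minutes -- which is why the inference bill scales so visibly with usage.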
The Road Back -- and Who Holds the Keys
Forrester Research has predicted that 50% of all AI-related layoffs will be reversed by 2027. A separate survey found that 55% of employers who cut staff citing AI already regret the decision. Goldman Sachs, in an April 2026 report, offered a sobering note to the workers caught in the middle: those who were laid off should not expect an easy path back. Re-entry into the job market will likely mean a longer search, lower starting pay, and less desirable conditions than what they left.
That is the near-term picture. The longer-term view may look different. The companies that are rehiring are not doing so from a position of strength. They need specific expertise they no longer have. They let go of institutional knowledge that cannot be reconstructed from a chatbot. Workers who are called back -- particularly those with domain expertise in industries where AI failed to deliver -- may find themselves in a negotiating position they did not have before. The company needs them more than they need to accept whatever is offered.
The pilot analogy holds: you can decide you don't need a pilot, but the moment you realize you were wrong, you need that pilot a great deal more than you did before you made the mistake.