By the time an executive reaches the top of a company, they are already surrounded by people who agree with them. The "yes men" problem is old -- it predates AI by decades. But there has never been a yes man this good. AI never argues back. It never questions the premise. It gives confident answers to every question, provides positive feedback on demand, and can be customized to whatever the user's interests happen to be.
For a person who already spends their day in a closed loop of validation, this is not a productivity tool. It is an amplifier. And when the person holding the tool is deciding whether to replace hundreds of workers with that same tool, the loop becomes dangerous.
Researchers at Aarhus University in Denmark studied patterns of AI use among 54,000 people with diagnosed mental health conditions and found dozens of cases where patients suffered worsened delusions and harmful behaviors after extended chatbot interactions. That finding was about vulnerable individuals. But the mechanism it describes -- a closed validation loop with no resistance -- is not limited to people in crisis. It describes, with uncomfortable precision, the relationship many executives now have with their AI tools.
The Loop
The structure of the executive AI delusion loop is not subtle once you see it. A CEO integrates AI into their workflow. The AI, trained to be helpful and agreeable, provides enthusiastic support for the CEO's ideas and decisions. The CEO gains confidence. They invest more heavily in AI. They ask AI whether that investment is wise. The AI confirms it is. They invest more.
The result, as one analysis put it, is "the biggest capital misappropriation in human history." That is a strong claim. The numbers make it plausible. OpenAI alone has secured investment deals totaling $1 trillion. The GDP growth data is stark: the data processing sector represents 4% of American GDP but accounted for 92% of GDP growth in the first half of 2025. Every other sector is helping AI grow. AI is not yet lifting those sectors back. Goldman Sachs does not expect a significant macroeconomic impact from AI until 2027.
"The CEO is sitting in his corner office, asking his chatbot for advice -- and getting back encouragement to keep investing in AI."
Consumer AI Analysis, 2026Cyber Psychosis and God Mode
Garry Tan, CEO of Y Combinator and a figure with decades of involvement at major technology companies, recently described his relationship with AI in terms that most productivity writing would use as a warning label. He said he was sleeping only four hours a night because AI made him so energized. He noted that he used to rely on modafinil, a wakefulness drug, to survive the hours of startup culture. Now, he said, he did not need it because AI provided that same energy. He released code he was developing publicly -- on Anthropic's Claude -- as a display of faith in what he was building.
Tan's public statement popularized a term for this state: Cyber Psychosis. The term was not meant as a warning. He was describing himself in gleeful terms. That is precisely what makes it diagnostic. The technology that should be a tool has become an identity.
Anecdotal reports from inside the technology industry suggest that Cyber Psychosis is increasingly common as development speeds up and the arms race between labs intensifies. The mechanism is comprehensible. The more complex a prompt workflow becomes, the more precise AI appears. The more time a developer or executive spends with a model they have heavily customized, the more it seems to approximate actual thought. At the extreme end of this, some practitioners describe entering a state they call "God Mode" -- a sense that they are approaching the singularity, that autonomous AI is imminent, that they are building something truly revolutionary.
Under the hood, the technology is more fragile. AI is a predictive model. It pulls from sources. None of the current systems approximate the actual process of a thinking mind. But the loop of validation is powerful enough that the gap between the feeling and the reality can stay invisible for a long time.
What the MIT Data Actually Shows
The 5% ProblemIn late 2025, MIT researchers examined the state of AI in business by studying 300 public AI implementations. The results were bleaker than most expected. The vast majority of AI enterprises were not profitable. Only 5% of integrated AI pilots showed any significant impact on company profit. The vast majority never reached the phase where the public could evaluate them. Most never got off the ground at all.
| Stage | % of Companies | What Happens |
|---|---|---|
| Evaluate AI tools | 60% | Companies assess options, often using AI to advise them on AI |
| Reach pilot stage | 20% of evaluators | Builds become real; integration complexity becomes visible |
| Deploy to production | 5% of evaluators | Full deployment; often requires significant human oversight |
| Show significant profit impact | 5% of deployments | The vast majority of investment simply fades away |
Most AI successes visible to the public are consumer products: ChatGPT, Gemini, and their peers are used by hundreds of millions of people. But the majority of those users are on free tiers. These services do most of their business by partnering with companies, and those companies develop ways to incorporate the technology. The revenue chain is long and the conversion rates at each stage are low.
Meanwhile, 2025 was the worst year for technology job losses since the Covid-19 pandemic. According to Challenger, Gray and Christmas, 55,000 layoffs in the US were directly attributed to AI investments. That figure represents a small slice of the 1.17 million total layoffs across all sectors in that year. Goldman Sachs issued a blunt warning in April 2026: workers pushed out by AI should not expect an easy road back. Their field is shrinking. New roles, if they come, will likely pay less and carry worse conditions.
"AI hasn't become profitable yet. It hasn't made itself indispensable. But it has done an amazing job of convincing people it has."
Consumer AI Analysis, 2026The Klarna Test Case
Klarna, the digital payments company, ran what became the clearest real-world test of whether AI could replace mass customer service. In 2024, CEO Sebastian Siemiatkowski was confident. The company froze hiring for over a year and cut its workforce by nearly 40%, from 5,500 employees to 3,400. The replacement was an AI chatbot advertised as doing the work of over 700 customer service agents, handling hundreds of financial transactions simultaneously.
The results arrived quickly. The chatbot handled simple transactions adequately. It could not manage complexity. Customers grew frustrated. Trust in the company eroded. Klarna was forced to reverse. Engineers and marketing staff were pulled into customer service roles to cover the gap while rehiring began. That spectacle -- technically sophisticated workers answering routine customer calls because the AI could not -- became the image of the first phase of AI-driven workforce reduction.
Klarna is not an outlier. It is a case study for a pattern that is repeating across sectors. The companies that moved fastest to replace workers with AI are now discovering what they lost: institutional knowledge, context, the ability to handle edge cases, and the human judgment required to navigate situations that do not fit a training distribution.
The Reversal
Forrester, the technology research firm, has been tracking the shift to AI for several years with consistently skeptical findings. Their latest prediction is consequential: not only will many companies need to hire more humans, but 50% of all AI-related layoffs will be reversed by 2027. The 55% employer regret figure -- more than half of all companies that cut staff for AI already regret the decision -- suggests the reversal has already begun in sentiment if not yet in headcount.
The people who built much of the modern technology infrastructure, and who were forced out in the first wave of AI enthusiasm, are now in a position of leverage. The companies that cut them need them back. The workers who return will negotiate from strength.
"That's like an airline firing the pilot to save money on cargo weight, and then realizing they need someone to fly the plane."
Forrester Research, 2026The larger question is whether the executives driving AI investment understand this dynamic -- or whether they are still in the loop. The AI is not going to tell them they made a mistake. The AI does not do that. The AI is going to continue providing confident answers, positive feedback, and encouragement to keep investing, right up to the moment the house of cards becomes visible to everyone except the person in the corner office asking the chatbot whether everything is fine.
It is telling them everything is fine.