The simplest way to misunderstand the AI backlash is to treat it as ignorance. The people pushing back are not technophobes who do not understand how large language models work. Many of them use AI daily. They are making a judgment, and increasingly, it is a negative one.
The numbers are stark. The share of Americans who believe AI will have a negative impact on society rose from 34% in December 2024 to 47% by mid-2025, according to polling tracked across multiple surveys. Pew Research finds only 10% of Americans describe themselves as "more excited than concerned" about AI. An NBC poll placed AI at 46% negative sentiment, less popular than ICE and, in that same poll, less popular than President Trump.
"AI is now one of the most unpopular things in America," observes the YouTube channel House of El, run by a presenter with a PhD in computer science who has been tracking AI public sentiment trends. "Here's a paradox for you. Americans increasingly hate artificial intelligence, but they can't stop using it."
It Is Not a Knowledge Gap. It Is a Values Gap.
Stanford's 2026 AI Index documents a 46-percentage-point chasm between AI expert sentiment and public sentiment: 56% of AI researchers and practitioners view AI development positively, against only 10% of the general public. The common interpretation is that the public needs better AI education. The data suggests something different.
"AI experts tend to think about capabilities, benchmarks, what the technology can actually do. The public thinks about jobs, costs, whether their kids' education will be worth anything, whether artists can still make a living. And on those metrics, AI is failing to deliver benefits that justify the current costs."
- House of El (PhD, CS), "The AI Backlash Is Getting Worse"

The public is not wrong to evaluate AI on those terms. Those are the terms that matter to people's lives. And by those measures, the picture is genuinely mixed at best and deteriorating at worst. 73% of Americans support mandatory government pre-approval of advanced AI models before they can be deployed. 82% distrust tech executives to self-regulate. These are not the numbers of an uninformed population. These are the numbers of a population that has been paying attention.
The Youngest Generation Is the Angriest
The backlash is not being led by older workers protective of obsolete skills. Gen Z, the first generation that grew up with smartphones and was explicitly trained to enter the digital knowledge economy, is the most hostile demographic toward AI, and its hostility is growing faster than any other group's.
Gen Z anger toward AI rose from 22% to 31% in a single year (Gallup), while excitement collapsed from 36% to 22%. According to Mercer's Global Talent Trends survey of 12,000 workers, Gen Z is 129% more likely than older workers to fear AI job obsolescence. And 49% of Gen Z job seekers believe AI has already devalued their college degree, before they have deployed those skills in a full career.
This is not abstract anxiety. It is a lived experience of watching the professional on-ramp get narrower. Junior roles, the entry-level positions where people learn by doing, are exactly the roles that AI handles most readily. The message being received is not "AI will change work." It is "there may not be work for you."
Challenger, Gray & Christmas reported roughly 55,000 US layoffs explicitly attributed to AI in 2025, as part of a total of 1.17 million US layoffs, the highest since COVID. Goldman Sachs analysts warned that workers displaced by AI "might face a long search to secure a new job in their current field, with the odds being that it will pay less." Mercer found that worker anxiety about AI job loss rose from 28% in 2024 to 40% in 2026. The anxiety is not irrational. It is responsive to evidence.
The Backlash Has Left the Internet. It Is Now Physical.
Online sentiment polls are one thing. What has changed in the past year is that resistance has moved into physical and legal space, and the numbers there are remarkable.
As of May 2026, 69 US jurisdictions have enacted data center bans or moratoriums, up from 8 just one year ago. Four are permanent. Between April and June 2025, 20 AI-related data center projects worth a combined $98 billion were blocked or significantly delayed. In some areas, electricity prices have risen by as much as 267% over five years, largely driven by data center demand, and residents have noticed.
"I feel like I'm playing by a different rule book, like I'm playing baseball and they're playing football."
- Michigan resident to Fortune, after the $16B Stargate data center was approved in Saline Township over a community vote against it

In Saline Township, Michigan, residents voted against a $16 billion Stargate data center. The developer sued, and construction started anyway. Governor Ron DeSantis of Florida signed legislation requiring data center developers to cover the full cost of their own electricity infrastructure, after residents successfully argued that ordinary ratepayers were subsidizing tech companies' energy consumption. "The most wealthy companies in the history of the world," DeSantis said at signing, "shouldn't have individual Floridians subsidizing their operations."
The creative industry resistance has similarly moved from tweets to courts. The 2023 Writers Guild of America strike, which ran 148 days and ranks among the longest in Hollywood history, was fundamentally about AI training on creative work. The lawsuit settlements have continued: one copyright recovery related to AI training data has been characterized as "the largest copyright recovery in US history." The underlying grievance is consistent: AI companies trained on copyrighted work without permission or compensation, then deployed tools that compete directly with the creators whose work they consumed.
"This doesn't sound like resistance to innovation. That's resistance to having decisions imposed by actors with vastly more money and political power."
- House of El, on communities fighting data center construction and artists fighting training data use

The Product Is Getting Measurably Worse
Set aside the politics and economics for a moment and look at what the models themselves are actually doing. OpenAI has been quietly publishing hallucination benchmarks in its own model system cards, and the numbers are not encouraging.
| Model | PersonQA Hallucination Rate | Source |
|---|---|---|
| o1 | 16% | OpenAI internal system card |
| o3 | 33% | OpenAI internal system card |
| o4-mini | 48% | OpenAI internal system card |
As YouTuber and software engineer Julian Whatley documents in his video "Is AI Eating Itself?", these are not third-party adversarial benchmarks. These are OpenAI's own measurements of its own models. The hallucination rate has tripled from the o1 generation to o4-mini. OpenAI's own language in the system card on this point is: "More research is needed to understand the cause of these results."
They do not know why the product is getting worse. They can measure it. They cannot explain it.
"You haven't been imagining it. The product is measurably getting worse. And OpenAI has been measuring exactly how much worse."
- Julian Whatley, "Is AI Eating Itself?"

The suspected mechanism is the shift to synthetic training data, AI-generated content used to train the next generation of AI. This works in verifiable domains where correctness can be checked programmatically: code, mathematics, structured reasoning. It breaks down catastrophically in domains involving people, events, history, and facts, the domains where users ask the most questions and where errors have the most real-world consequence.
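To make that asymmetry concrete, here is a deliberately crude toy sketch, not drawn from any of the cited research: assume each model generation introduces some new errors, and that in a verifiable domain bad outputs can be filtered out (failing tests, wrong proofs) before being reused as training data, while in an unverifiable domain they carry forward. All names and rates below are invented for illustration.

```python
# Toy model of error accumulation across model generations trained on the
# previous generation's output. Purely illustrative; the rates are invented.
def error_after(generations: int, new_error_per_gen: float, verifiable: bool) -> float:
    error = 0.0
    for _ in range(generations):
        carried = 0.0 if verifiable else error  # checkable domains filter out bad data
        error = min(1.0, carried + new_error_per_gen)
    return error

# A verifiable domain stays near the per-generation error floor;
# an unverifiable one compounds it.
print(error_after(5, 0.05, verifiable=True))    # 0.05
print(error_after(5, 0.05, verifiable=False))   # 0.25
```

The real dynamics of synthetic-data training are far messier than this, but the gap between checkable and uncheckable domains is the point being made.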
Epoch AI's peer-reviewed research, published at ICML 2024, estimates the total supply of usable high-quality human-generated text for training at roughly 300 trillion tokens, with exhaustion projected between 2026 and 2032. The models that come after that supply is exhausted will have no choice but to train primarily on AI-generated content. ChatGPT currently handles approximately 2.5 billion prompts per day. At a 1% error rate, conservative by OpenAI's own benchmarks, that is 17,000 fabricated answers delivered to real users every minute.
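That per-minute figure is easy to check. A minimal back-of-the-envelope sketch using only the two numbers cited above; the constant names are ours, not from any source:

```python
# Rough check of the "fabricated answers per minute" figure cited above.
PROMPTS_PER_DAY = 2.5e9   # approximate ChatGPT prompt volume cited in the text
ERROR_RATE = 0.01         # 1%, well below OpenAI's own PersonQA hallucination rates
MINUTES_PER_DAY = 24 * 60

errors_per_minute = PROMPTS_PER_DAY * ERROR_RATE / MINUTES_PER_DAY
print(f"{errors_per_minute:,.0f} erroneous answers per minute")  # ~17,361
```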
"These are the same models writing the code for your banking app, screening your medical records, and being sold to your kid's school district as a tutor," Whatley notes. "The product is being sold in both places as if it works the same in each."
The Corporate Reckoning: 55% of Employers Regret Their AI Layoffs
For two years, Klarna was the flagship case study for AI replacing human workers. The Swedish fintech company cut its workforce by 40%, from 5,500 to 3,400 employees, replacing customer service staff with an AI chatbot. CEO Sebastian Siemiatkowski described the chatbot as doing the equivalent work of 700 full-time employees. The story got enormous coverage as evidence of AI's labor market transformation.
Then customers revolted. Klarna was forced to reverse course, training engineers and marketers to handle customer calls that the AI was handling poorly. The case study became an argument for exactly the opposite conclusion from what it originally appeared to demonstrate.
Klarna is not alone in this trajectory. Forrester Research now predicts that 50% of all AI-related layoffs will be reversed by 2027, citing growing evidence of customer dissatisfaction and operational degradation. 55% of employers who made AI-driven layoffs already report regretting the decision. A separate MIT study examining 300 public AI implementations in late 2025 found that only 5% showed any significant impact on company profitability.
Deutsche Bank analysts have coined the term "AI redundancy washing" to describe companies using AI as a justification for workforce reductions that serve other cost-cutting objectives, making the actual labor market data on AI displacement genuinely difficult to interpret. The confusion itself is part of the backlash story: workers have no reliable way to know whether their job loss was actually about AI or just blamed on it.
"The CEOs who are replacing their workers with AI are, ironically, likely getting their advice from AI."
- The Infographics Show, "The AI Gold Rush Is Dead. Corporate AI Is A DELUSION."

The "sycophancy trap" identified by The Infographics Show deserves particular attention. AI chatbots are trained to optimize for user approval, a property called sycophancy. When executives use AI tools to evaluate AI strategies, those tools will tend to validate the AI-friendly direction. OpenAI has publicly acknowledged testing its models for sycophancy and finding that "people often picked the most sycophantic answer." The result is a feedback loop: executives use AI to make decisions about AI, the AI recommends more AI, the executives proceed, and the outcome determines whether the loop continues.
The Financial Architecture Nobody Is Talking About
The economic case for AI's current scale rests on revenue numbers that look, on close examination, like companies paying themselves. Microsoft reports $37 billion in annualized AI-related revenue; analysis by technology writer Ed Zitron suggests 70-80% of that is OpenAI spending its Microsoft-sourced funding on Azure compute. Amazon reports $15 billion in AI revenue; analysis suggests over 80% is Anthropic spending its Amazon-sourced investment on AWS infrastructure.
"Every time you look for the supposed gold mine, you find someone just handing someone money instead of the gold coming out."
- Ed Zitron, technology writer and analyst

Goldman Sachs's "Tracking Trillions" report projects the total AI buildout could reach $7.6 trillion by 2031. OpenAI alone is spending an estimated $50 billion on compute in 2026, while remaining unprofitable. Anthropic has raised $108 billion in commitments in five months and is currently conducting a $50 billion funding round. Both companies require continued external capital to sustain operations at current scale.
The mechanism that should concern ordinary Americans: Apollo, Blackstone, and Vanguard are now routing pension fund and 401k money into AI data center infrastructure, on the premise that the revenue projections justifying the construction are sound. Harvard economics professor Jason Furman has pointed out that data processing represents only 4% of American GDP but accounted for 92% of GDP growth in the first half of 2025, a concentration that echoes the pre-crisis conditions of previous asset bubbles. Hyperscaler free cash flow is projected by the Financial Times to drop from $20-30 billion to approximately $4 billion as capital expenditure continues to surge.
"Your 401ks, your pensions are going into AI data centers now," Zitron says. "And this is where the real calamitous stuff starts."
A Democratic Signal Being Overridden
What unites the municipal data center bans, the Hollywood strikes, the copyright lawsuits, the Gen Z sentiment shift, and the employer regret statistics is a consistent underlying dynamic: people are expressing preferences that the AI industry is structurally positioned to ignore.
Communities vote against data centers. Construction starts anyway after litigation. Writers negotiate AI restrictions during a strike. Training on their work continues during negotiations. Retirement funds are directed into infrastructure backing companies that are losing money. Surveys find 73% of Americans support mandatory government pre-approval of advanced AI. The legislation does not exist.
| Regulatory Sentiment | % of Americans |
|---|---|
| Support mandatory government pre-approval of advanced AI models | 73% |
| Want more AI industry regulation overall | 72% |
| Distrust tech executives to self-regulate | 82% |
| Concerned about AI deepfakes (up from 58%) | 63% |
| Concerned about human AI dependency (up from 45%) | 50% |
The backlash is not irrational, uninformed, or driven by fear of change. It is a population making reasonable evaluations based on evidence and finding the results unsatisfactory. The question is whether the structures exist to translate that population-level judgment into policy. At the moment, the answer is mostly no, and that gap between democratic sentiment and actual AI governance is itself a driver of the resentment the surveys are measuring.
The paradox that opened this piece, mass adoption alongside mass resentment, may simply be what it looks like when a technology becomes too useful to abandon and too consequential to ignore at the same time. People are not confused about AI. They are angry at a situation they did not choose and cannot easily exit.
Key Data Points
- Gen Z excitement about AI fell from 36% to 22% in one year; usage rose from 48% to 56% in the same period (Gallup)
- 47% of Americans believe AI will have a negative societal impact, up from 34% in December 2024
- OpenAI's own benchmarks show hallucination rates rising from 16% (o1) to 33% (o3) to 48% (o4-mini)
- 69 US jurisdictions have enacted data center bans or moratoriums, up from 8 one year ago
- 55% of employers who made AI-driven layoffs already regret the decision (Forrester); Klarna reversed its flagship AI replacement program
- 73% of Americans support mandatory government pre-approval of advanced AI; 82% distrust tech executives to self-regulate
- Gen Z workers are 129% more likely than older workers to fear AI job obsolescence; 49% believe AI has already devalued their college degree
- Goldman Sachs projects the total AI buildout at $7.6 trillion by 2031; pension funds are now directly exposed to that investment