When a security researcher cracked McKinsey's Lily platform for $20 in two hours using a SQL injection technique first documented in 1998, the obvious headline was about enterprise AI security. The less obvious story was what happened next. In the weeks that followed the breach, neither Anthropic nor OpenAI rushed to release new models. Both stood up enterprise services divisions. Both hired armies of what the industry now calls "forward-deployed engineers" and began embedding them directly inside corporate build-rooms alongside client developers. That is not a technology company move. That is a consulting company move.
The AI lab as professional services firm is no longer a metaphor. It is an operating model. And the financial architecture being erected around it is staggering enough to reframe every assumption about what the next phase of the industry actually looks like.
The Numbers Behind the Transformation
Consider the scale Anthropic is building toward. The company's annualized revenue is projected to exceed $45 billion, up from roughly $9 billion at the end of 2024. It is seeking to raise $50 billion at a pre-money valuation of approximately $900 billion — close enough to a trillion dollars that the distinction is mostly rhetorical. It has a $200 billion Google Cloud commitment over five years. And Claude Code, its developer-facing coding assistant, grew 80x in a single year.
These numbers are not organic growth projections from a research lab. They are the financial architecture of an enterprise technology company executing a services-led expansion strategy. The model — in the business sense, not the neural network sense — is unmistakable to anyone who has watched the professional services industry evolve over the past four decades.
The pattern is McKinsey's. Build intellectual credibility through demonstrated expertise. Embed people inside client organizations. Become operationally indispensable. Expand the engagement. The difference is that Anthropic's engagement expansion isn't measured in billable hours. It's measured in token throughput, API calls, and enterprise seat licenses that generate revenue at software margins once the deployment is live.
Deployment Is the Product
The Lily breach mattered not because of what it revealed about McKinsey's security posture, but because of what it revealed about the deployment gap. The platform had passed internal review. It had been in production for two years. It was used by 70% of the firm's 40,000 consultants. And it had 22 unauthenticated endpoints out of 200. The checklist said secure. The API surface said otherwise.
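The gap is mechanical enough to illustrate. A review checklist asks whether authentication is configured; an audit of the API surface asks whether each endpoint actually refuses a request that carries no credentials. What follows is a minimal sketch of that second kind of check, with a placeholder host and hypothetical endpoint paths; nothing here reflects Lily's actual architecture:

    # Hypothetical sketch: probe each endpoint with no credentials and flag
    # any that serve data anyway. Host and paths are illustrative placeholders.
    import requests

    BASE_URL = "https://platform.example.com"
    ENDPOINTS = [
        "/api/v1/engagements",
        "/api/v1/documents/export",
        "/api/v1/admin/users",
    ]

    def find_unauthenticated(base_url, paths):
        """Return the paths that answer 200 to a request with no auth header."""
        exposed = []
        for path in paths:
            resp = requests.get(base_url + path, timeout=5)  # no credentials sent
            if resp.status_code == 200:  # a 200 without auth is an exposed endpoint
                exposed.append(path)
        return exposed

    if __name__ == "__main__":
        for path in find_unauthenticated(BASE_URL, ENDPOINTS):
            print(f"UNAUTHENTICATED: {path}")

Across 200 endpoints, a loop like this returning 22 hits is the entire breach in miniature.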
This is the deployment gap: the distance between a capable model and a functioning enterprise system. That distance is where the real work lives, and it is where the AI labs have decided to plant their flags. The response from both Anthropic and OpenAI after the Lily disclosure wasn't a model update. It was an organizational one — people, embedded in client environments, responsible for closing that gap.
"The signal is that the model was never the hard part."
— Nate B Jones, enterprise AI observer
What Jones is pointing at is the reorientation of competitive advantage in the industry. The assumption that dominated the first wave of AI investment — that whoever builds the best model wins everything — is being quietly abandoned by the labs themselves. The evidence is in their hiring. The evidence is in their partnership structures. And the evidence is in the kinds of problems their forward-deployed teams are actually solving inside client organizations, which have almost nothing to do with model architecture and almost everything to do with data pipelines, integration layers, change management, and governance frameworks.
The Infrastructure Behind the Consulting Playbook
The deployment pivot has an infrastructure dimension that is easy to underestimate. Anthropic's deal with xAI's Colossus 1 cluster — 220,000 Nvidia GPUs at an estimated cost of $5 to $6 billion per year — is the largest single compute procurement in the industry's history. The cluster was reportedly running at just 11% model FLOPs utilization (MFU) during training runs, which means the theoretical ceiling of what it can do hasn't been approached. That's either an engineering problem or a strategic reserve. Given where Anthropic appears to be headed, it is probably both.
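For readers unfamiliar with the metric, MFU is conventionally defined as the ratio between the floating-point operations a training run actually executes and the hardware's theoretical peak:

    \[
    \mathrm{MFU} = \frac{\text{model FLOPs executed per second}}{N_{\text{GPUs}} \times \text{peak FLOPs per GPU per second}}
    \]

At 11%, roughly nine-tenths of the cluster's rated compute goes unused during training. That unused nine-tenths is the headroom the "strategic reserve" reading depends on.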
The enterprise infrastructure market moved in concert. In a single week, SAP, Salesforce, ServiceNow, and Pinecone all shipped enterprise agent infrastructure updates — a coordinated market signal, even if uncoordinated in execution, that the platform layer for enterprise AI deployment is being built out at pace. The labs need that platform layer. The platform vendors need the labs' models. The enterprise clients need both to work together seamlessly. That is an ecosystem, and ecosystems have network effects that are extraordinarily difficult to disrupt once they stabilize.
"The implementation question isn't downstream of a strategic decision. It's effectively the strategic decision itself."
— Nate B Jones
Talent as Conviction
Perhaps the clearest signal of where the center of gravity in the industry has shifted is who is moving toward Anthropic and why. The company has become a destination for a specific kind of executive exodus — CTOs, cofounders, and senior engineers leaving positions of institutional power to join as Members of Technical Staff. The title is deliberately unglamorous. The motivation is anything but.
Henry Shi, who served as CTO of Super.com, spent nine months preparing before making the move to an MTS role at Anthropic. His stated reason was a desire to be in "the front row" when AGI arrives. Shi predicts that threshold arrives somewhere between 2027 and 2028. He is not alone in that estimate. Anthropic co-founder Jack Clark has put a 60% probability on AI systems being able to build themselves without human intervention by 2028. If that estimate is even roughly correct, the people choosing to be inside Anthropic right now are not making a career move. They are making a historical bet.
This reverse brain drain — from positions of leadership at established companies toward the research frontier — is a form of revealed preference that market data alone cannot capture. When the people who could run companies choose to be individual contributors at an AI lab instead, it says something precise about where they believe consequential decisions are actually being made.
The Safety Paradox at Scale
None of this sits easily alongside the identity Anthropic has cultivated since its founding. The company was created, in part, as a reaction to what its founders saw as insufficient caution at OpenAI. Its safety research program — Constitutional AI, interpretability work, alignment investments — has been among the most substantive in the industry. And yet the company is now pursuing a valuation that would make it one of the most valuable entities ever created, deploying embedded engineering teams inside corporate clients, and operating compute infrastructure at a scale that dwarfs most national technology budgets.
The tension became explicit when the Pentagon moved to blacklist Anthropic following a dispute over safety guardrails on military AI applications. Anthropic sued. A federal judge temporarily blocked the blacklisting. The episode — a safety-focused AI lab in federal court defending its right to work with the military — illustrated precisely the kind of institutional contradiction that emerges when a mission-driven organization achieves the scale of a major enterprise services business.
Co-founder Daniela Amodei has addressed the tension directly: "Being in business doesn't have to be in tension with doing good." That framing is defensible and, one suspects, genuinely held. It is also the kind of framing that becomes harder to sustain as the business grows larger, the client relationships more complex, and the political environments more contested. McKinsey has been making the same argument for decades. The judgment of history on that argument is still being written.
What is not in dispute is that the AI lab as research institution is giving way to something more complicated, more consequential, and far more embedded in the structures of global commerce and governance than anyone fully anticipated when the first large language models became publicly accessible. The model was never the hard part. The hard part is everything that comes after — which, as it turns out, looks a great deal like consulting.