Elon Musk launched xAI to compete directly with OpenAI, Google DeepMind, and Anthropic in the race toward advanced artificial intelligence.
But increasingly, xAI may be evolving into something else entirely: a neocloud infrastructure company built around selling AI compute at massive scale.
That is the argument emerging after xAI’s surprising partnership with Anthropic, which reportedly leased the full compute capacity of xAI’s Colossus 1 data center. The deal suggests xAI’s real strategic value may not only come from building AI models like Grok, but from becoming a provider of AI infrastructure itself.
The term “neocloud” generally refers to newer AI-focused cloud infrastructure providers that specialize in GPU compute rather than traditional enterprise cloud services.
Unlike AWS, Microsoft Azure, or Google Cloud, neocloud companies focus almost entirely on supplying raw GPU compute for AI training and inference.
Companies like CoreWeave, Lambda, Crusoe, and Fluidstack are often grouped into this category because they built businesses around renting AI compute to model developers.
Now, xAI increasingly appears to be operating similarly.
The biggest trigger for this discussion was xAI’s deal with Anthropic.
According to reports referenced in TechCrunch’s analysis, Anthropic leased the entire compute capacity of xAI’s Colossus 1 facility, totaling roughly 300 megawatts of compute infrastructure.
That arrangement is unusual.
Most frontier AI companies like Google, Meta, and OpenAI typically reserve their compute infrastructure primarily for internal model development. They treat GPU access as a strategic asset that should remain tightly controlled.
xAI, however, appears increasingly willing to monetize excess compute externally.
That is much closer to the behavior of a cloud infrastructure provider than a traditional AI lab.
xAI’s Colossus supercomputer has rapidly become one of the most talked-about AI infrastructure projects in the industry.
Located in Memphis, the facility was reportedly assembled at extraordinary speed using massive Nvidia GPU clusters to power Grok and other xAI systems. But the scale of the infrastructure also creates another opportunity: renting compute to other AI companies.
That matters because AI compute has become one of the most valuable commodities in technology.
Demand for GPU infrastructure is exploding as frontier labs, startups, and enterprises all race to train and deploy ever-larger models.
This creates an environment where infrastructure itself may become more valuable than individual AI applications.
The comparison many analysts are making is to CoreWeave.
CoreWeave started as a GPU infrastructure provider and rapidly became one of the biggest winners of the AI boom by renting Nvidia compute capacity to companies building AI systems.
Its valuation exploded because AI labs increasingly needed infrastructure immediately rather than waiting years to build data centers themselves.
xAI may now be pursuing a hybrid version of that model: building its own frontier systems like Grok while renting spare data center capacity to other AI companies.
That approach could help offset the massive costs associated with building hyperscale AI data centers.
One reason this strategy fits Musk is that he has repeatedly emphasized the importance of controlling core infrastructure layers.
Across his companies, Musk has consistently favored vertical integration: Tesla builds its own batteries and training hardware, SpaceX manufactures its own rockets and engines, and Starlink operates its own satellite network.
Musk has also spoken publicly about AI becoming fundamentally constrained by energy supply and compute access.
That explains why xAI has aggressively expanded data center construction and GPU acquisition over the past year.
In this model, owning the infrastructure layer may become more strategically important than owning the best chatbot.
Another reason this shift matters is that AI model development itself is becoming extraordinarily expensive.
Training frontier systems now requires enormous GPU clusters, vast amounts of energy, and billions of dollars in capital.
Only a handful of companies can realistically afford to compete at that scale.
Infrastructure providers, however, may benefit regardless of which AI lab ultimately wins the model race.
That is why investors are pouring money into data centers, GPU supply chains, and the energy infrastructure that powers them.
The AI economy increasingly resembles a modern industrial buildout rather than a pure software market.
What makes xAI unusual is that it now appears to operate simultaneously as a frontier AI lab, a consumer product company through Grok, and a compute infrastructure provider.
That blending of categories is becoming increasingly common across AI companies.
OpenAI is building enterprise infrastructure. Anthropic is launching deployment ventures with Wall Street firms. Google integrates AI across cloud and consumer services simultaneously.
But xAI’s infrastructure pivot appears especially aggressive.
One of the clearest themes emerging across the AI industry is that compute access itself may become the ultimate competitive advantage.
Models can improve quickly. Interfaces can be copied. Features spread fast.
But large-scale AI infrastructure is hard to replicate: it takes years to build, consumes enormous capital, and depends on scarce energy and chip supply.
That is why xAI’s willingness to operate like a neocloud company may be less surprising than it first appears.
The AI companies with the most compute may ultimately control the future of the industry, regardless of who has the best chatbot today.