Anthropic doubled the usage allowance within Claude Code's rolling five-hour window on May 06, 2026, and removed peak-hour throttling for Pro and Max subscribers, a shift the company directly tied to a new compute agreement with SpaceX.
Key Takeaways
- Anthropic doubled Claude Code’s five-hour rolling usage limit for Pro and Max users from 5,000 to 10,000 compute units.
- The company eliminated peak-hour throttling for those tiers, a restriction that previously reduced availability during high-demand periods.
- Anthropic credits the changes to a new agreement giving it access to the full compute capacity of SpaceX’s data center in Memphis, Tennessee.
- API rate limits for Claude Opus increased by 75%, according to Anthropic’s blog post.
- The announcement came at Anthropic’s Code with Claude developer conference, where CEO Dario Amodei said the SpaceX deal was “about scaling without trade-offs.”
Historical Context
Anthropic’s foray into AI compute infrastructure has roots in the early days of the company. Founded in 2021 by a team led by Dario Amodei, Anthropic set out to rethink AI development with an emphasis on safety and transparency. Since its inception, the company has focused on building AI models that assist developers rather than merely competing on model-performance leaderboards.
In 2025, Anthropic launched Claude Code, an AI-powered coding tool designed to help developers with tasks such as writing code, debugging, and testing. Since then, Claude Code has gained traction within the developer community, with numerous adoption and success stories shared by users. One major bottleneck remained, however: the compute limitations imposed by Anthropic’s infrastructure.
SpaceX’s Memphis Data Center Powers Anthropic’s Surge
On May 06, 2026, at Anthropic’s inaugural Code with Claude event in San Francisco, the company didn’t just announce higher usage caps. It revealed the infrastructure behind them: a full-capacity compute agreement with SpaceX for its Memphis, Tennessee data center. That facility, built to support Starlink network operations and internal SpaceX AI workloads, now allocates idle cycles to Anthropic — and potentially only Anthropic.
The arrangement is unusual. SpaceX hasn’t previously opened its compute infrastructure to outside firms, especially not for sustained AI model serving. The Memphis site, quietly expanded over the past 18 months, houses more than 10,000 H100-class GPUs, many of them underused outside peak satellite telemetry windows. Now that surplus is being repurposed to run Claude Code and Opus workloads during Earthside downtime.
There’s no financial figure attached to the deal in the original report, but the operational impact is immediate. Anthropic’s infrastructure team confirmed in a follow-up email that Memphis now handles 40% of global Claude Code API traffic, routed via low-latency fiber connections to East Coast and European endpoints.
The Limits That Held Developers Back
Before May 06, developers on Anthropic’s Pro and Max plans hit walls. The five-hour rolling limit on Claude Code — previously pegged at 5,000 compute units — choked CI/CD pipelines, especially during automated testing bursts. Teams at fintech and embedded systems firms reported canceled jobs when parallel linting and generation tasks spiked usage.
Worse was the peak-hour throttling. Between 9 a.m. and 11 a.m. Pacific time, Anthropic cut available throughput by 30%, citing cluster saturation. That created a de facto bottleneck for engineering teams on the U.S. West Coast, where most early adopters are based. Some companies quietly shifted workloads to weekends to avoid slowdowns.
Those limits weren’t just annoying; they were design constraints. You shaped your automation scripts around them. You batched smaller tasks. You avoided real-time code generation in production workflows. The ceiling wasn’t theoretical. It was in the logs.
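The batching pattern teams fell back on can be sketched as a small client-side budget tracker. This is a hypothetical illustration: Anthropic does not expose compute-unit accounting in this form, and `RollingBudget` and its per-job unit costs are invented names for the sketch.

```python
import time
from collections import deque


class RollingBudget:
    """Tracks compute-unit spend over a rolling window so jobs can be
    refused client-side before the provider's cap rejects them.

    Hypothetical sketch: "compute units" here are whatever your plan
    meters; the 5,000-unit / five-hour figures match the article's old cap.
    """

    def __init__(self, limit, window_s=5 * 3600, clock=time.monotonic):
        self.limit = limit        # e.g. 5,000 units under the old cap
        self.window_s = window_s  # five-hour rolling window
        self.clock = clock        # injectable for testing
        self.events = deque()     # (timestamp, units) pairs

    def used(self, now=None):
        """Units spent inside the current window, expiring old entries."""
        now = self.clock() if now is None else now
        while self.events and now - self.events[0][0] >= self.window_s:
            self.events.popleft()
        return sum(units for _, units in self.events)

    def try_spend(self, units):
        """Record a job if it fits under the cap; otherwise refuse it."""
        now = self.clock()
        if self.used(now) + units > self.limit:
            return False
        self.events.append((now, units))
        return True
```

With an injected clock you can see the old wall directly: after spending 5,000 units, every further job is refused until the five-hour window rolls past the earliest entries.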
What Changed Overnight
- 10,000 compute units available over five hours (up from 5,000).
- No reduction in capacity during peak hours — throttling disabled for Pro and Max.
- Opus API rate limits increased from 1,200 to 2,100 requests per minute.
- Token burst limits for Opus raised from 200,000 to 350,000 per minute.
- Memphis data center now supports four availability zones for redundancy.
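A client pacing itself to the new Opus ceiling could use a standard token-bucket limiter. The 2,100 requests-per-minute figure comes from the announcement; the `TokenBucket` class and its parameters are illustrative and not part of any Anthropic SDK.

```python
import time


class TokenBucket:
    """Generic client-side rate limiter: tokens refill continuously at the
    target rate, and each request consumes one. A request is refused when
    the bucket is empty rather than sent to fail server-side."""

    def __init__(self, rate_per_min, burst, clock=time.monotonic):
        self.rate = rate_per_min / 60.0  # tokens added per second
        self.capacity = burst            # max tokens held at once
        self.tokens = float(burst)
        self.clock = clock               # injectable for testing
        self.last = clock()

    def allow(self, n=1):
        """Refill based on elapsed time, then try to take n tokens."""
        now = self.clock()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False
```

At 2,100 requests per minute the bucket refills at 35 tokens per second, so even a fully drained burst allowance recovers in well under a second.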
Anthropic’s Infrastructure Gambit
Most AI firms either build their own data centers or lease from cloud giants. Anthropic has done neither at scale, until now. Instead, it’s piggybacking on a high-performance, underused facility owned by a company with no direct stake in the AI race. That’s not just clever. It’s a workaround for the cloud oligopoly.
Think about it: AWS, Azure, and Google Cloud dominate AI compute. But they’re also the home turf of Claude’s biggest rivals — Amazon’s investment in Anthropic notwithstanding. Relying on them means competing for resources with models fine-tuned to favor their own ecosystems. By turning to SpaceX, Anthropic sidesteps that conflict.
The trade-off? Control. SpaceX owns the hardware. Anthropic doesn’t. If Starlink needs more inference power during a geomagnetic storm or satellite deployment cycle, Memphis could deprioritize outside workloads. There’s no SLA published on uptime guarantees for Anthropic’s access. That’s a risk developers should weigh.
Competitive Landscape
The move by Anthropic to tap into SpaceX’s data center is a bold statement in the AI compute market. With the likes of Amazon, Google, and Microsoft dominating cloud infrastructure, Anthropic’s decision to form an alliance with a space company has sparked questions about the future of cloud computing.
While Anthropic may have secured a significant advantage in terms of compute capacity, its reliance on SpaceX also introduces new risks. What happens if the satellite operations of Starlink require priority over Claude Code workloads? Will Anthropic be able to mitigate the impact on its users, or will it be forced to navigate the challenges of sharing compute resources with a company that doesn’t have AI as its primary focus?
Why This Isn’t Just About More Tokens
On the surface, this is a usage bump. But it’s actually a signal: Anthropic is serious about developer adoption. The Code with Claude conference wasn’t a product launch. It was a pitch. And the pitch is that you can build real, scalable tooling on top of Claude — without hitting artificial ceilings.
That’s critical for the next phase of AI integration. We’re past the era of prompt playgrounds and one-off code suggestions. Developers are embedding AI into linters, debuggers, and deployment pipelines. Those systems demand predictable, burstable capacity. Anthropic’s old limits made that hard. The new ones, backed by SpaceX’s spare cycles, make it possible.
And it’s not just about scale. It’s about trust. By naming SpaceX and specifying the Memphis facility, Anthropic did something rare: it gave developers a physical anchor for their API calls. You’re not just talking to a black box in “the cloud.” You’re hitting servers in a warehouse near the Mississippi River, humming during satellite handoffs. That transparency matters.
What This Means For You
If you’re on the Pro or Max plan, you can now run longer, denser code generation jobs without hitting walls. CI pipelines that failed on May 5 will likely succeed on May 6. You’ve got 75% more headroom on Opus API calls — enough to support heavier real-time use in IDE plugins or internal tools. And you no longer need to time your automation around peak-hour throttling. That changes how you design systems.
But watch for volatility. SpaceX’s primary mission isn’t your test suite. If satellite operations spike, Memphis might deprioritize Anthropic workloads. Monitor latency and error rates, especially during orbital events. And don’t assume this model scales nationally — Memphis is just one site. Anthropic hasn’t announced plans for similar deals elsewhere.
This move proves Anthropic can innovate beyond model weights and prompt engineering. It’s playing infrastructure chess while others focus on leaderboard gains. But relying on a space company’s spare compute? That’s either brilliant or brittle. Maybe both.
Key Questions Remaining
The Anthropic-SpaceX deal raises more questions than answers. What happens when Starlink needs every GPU in Memphis — and Claude Code goes quiet? Will Anthropic’s users be affected by the ebbs and flows of satellite operations? And how does this move impact the broader AI compute landscape?
Anthropic’s bold gamble could disrupt the status quo in AI infrastructure, but it also carries significant risks. As the company navigates this new landscape, one thing is clear: the future of AI compute is uncertain, and Anthropic is at the forefront of a shift that may redefine the boundaries of cloud computing.
Sources: Ars Technica, TechCrunch


