On April 30, 2026, nine jurors in Northern California will begin hearing arguments in a trial that could dismantle one of the most powerful AI companies in existence — all over a promise made in 2015 and a $38 million donation. Elon Musk is suing Sam Altman and OpenAI, alleging betrayal, deception, and a fundamental breach of the nonprofit mission they both once swore to uphold. The courtroom will become the unlikely stage for a reckoning not just about money or control, but about the soul of artificial intelligence.
Key Takeaways
- Elon Musk is seeking $134 billion in damages from OpenAI and Microsoft, with the payout directed to OpenAI’s nonprofit arm, not himself.
- The trial begins April 30, 2026, in Northern California, with a nine-person jury delivering an advisory verdict to guide the judge.
- Musk claims Altman and Brockman misled him into funding OpenAI by promising it would remain nonprofit, then pivoted to for-profit without informing him.
- Witnesses include Ilya Sutskever, Mira Murati, and Satya Nadella — and cringey texts and diary entries are expected to surface.
- A ruling could force OpenAI to revert to nonprofit status and remove Altman and Brockman from leadership.
The $38 Million Promise That Started It All
When Elon Musk helped cofound OpenAI in 2015, he didn’t just lend his name. He wrote a check for $38 million — one of the largest early contributions — on the explicit condition that the company remain a nonprofit dedicated to advancing AI for the public good. That wasn’t a side note. It was the foundation. Musk wasn’t interested in building another tech empire. He was trying to build a counterweight to Google, to prevent any single corporation from monopolizing AI. And he believed the only way to do that was by staying open, transparent, and free from shareholder pressure.
But by 2017, that vision was already cracking. Internal documents show that Sam Altman and Greg Brockman were discussing the creation of a for-profit subsidiary. The reasoning? The AI race was accelerating, and a nonprofit couldn’t raise the capital needed to compete. Open-source was no longer safe. Secrecy was becoming a strategic necessity. The more powerful the models, the more dangerous it was to share them.
Musk, according to the court filings, was told none of this in full. When he threatened to cut funding unless the nonprofit structure was preserved, Altman and Brockman allegedly assured him they were committed to the original mission. Musk claims those assurances were false. He claims they were already planning the pivot while stringing him along. And now, years after he left OpenAI in a public feud, he’s demanding accountability.
What’s at Stake Beyond the Money
The $134 billion figure isn’t random. It’s Musk’s estimate of the value he believes OpenAI has extracted from its broken promise — a number pulled from Microsoft’s $13 billion investment and OpenAI’s projected IPO valuation. But Musk isn’t asking for a dime for himself. He’s asking the court to award any damages to OpenAI’s nonprofit entity. That’s unusual. It suggests this isn’t just about revenge or money. It’s about legacy.
What Musk wants — really wants — is for OpenAI to be forced back into the nonprofit structure he believed he was funding. He wants Altman and Brockman removed from leadership. He wants the for-profit arm dissolved or restructured. And he wants the company’s original mission restored: open-source, transparent, and untethered from profit motives.
But here’s the catch: even if Musk proves Altman and Brockman misled him, he may not have standing to sue in the first place. OpenAI argues that Musk was never a formal board member, that he walked away in 2018, and that he even expressed interest in becoming CEO of the for-profit arm. They say he agreed to the pivot. They say the texts and emails will show it.
The Trial Will Air the AI Industry’s Dirty Laundry
This trial isn’t just about legal technicalities. It’s about power, ego, and the messy reality of building world-changing tech. Over the next several days, the public will hear from Ilya Sutskever, OpenAI’s former chief scientist, and Mira Murati, its former CTO. Satya Nadella will also take the stand — a rare appearance for the usually guarded Microsoft CEO.
And then there are the messages. Cringey texts, raw diary entries, private Slack logs — all expected to be entered into evidence. This is unprecedented. The AI industry runs on hype, carefully curated narratives, and tightly controlled leaks. But in this courtroom, none of that matters. The guardrails are gone. We’re going to see the scheming, the late-night arguments, the power plays. We’re going to hear how decisions were really made — not how they were later explained in blog posts.
OpenAI’s Defense: Necessity, Not Betrayal
OpenAI’s argument is straightforward: the world changed, and they had to change with it. In 2015, they believed open-sourcing AI was safe. By 2019, they didn’t. The risks of misuse — deepfakes, autonomous weapons, mass disinformation — became too great. And the cost of training models like GPT-4 and beyond was astronomical. A nonprofit couldn’t raise $10 billion from philanthropy. But Microsoft could.
The creation of the capped-profit subsidiary wasn’t a betrayal, they say. It was a survival mechanism. And Musk knew it. According to OpenAI, Musk didn’t just agree to the structure — he wanted to run the for-profit arm himself. When that didn’t happen, he left. The company says the shift was transparent, documented, and approved by the board.
They also point out the irony: Musk, who runs multiple for-profit companies (Tesla, SpaceX, X), is now suing to force a company back into nonprofit status. He didn’t try to stop the pivot in 2019. He didn’t file suit until 2025. Why now? OpenAI’s lawyers will likely argue this is less about principle and more about control — especially with OpenAI’s IPO looming.
The IPO That Could Vanish Overnight
OpenAI’s IPO was supposed to be one of the biggest tech debuts of the decade. Investors were lining up. Valuations were soaring. But this trial could derail it all. If the court rules that OpenAI violated its founding agreement, it could invalidate the for-profit structure entirely. That doesn’t just kill the IPO. It could force a complete restructuring — or even the dissolution of the for-profit entity.
Microsoft’s role adds another layer. The company has poured $13 billion into OpenAI and integrated its models deeply into Azure, Office, and GitHub. A ruling against OpenAI could put that entire investment at risk. It could also open Microsoft to further legal scrutiny — especially if the court finds that it pressured OpenAI to pivot for its own gain.
- OpenAI raised over $13 billion in private funding, most of it from Microsoft.
- The company’s projected IPO valuation was between $80B and $100B before the lawsuit.
- Musk’s $134B damages claim is based on Microsoft’s investment and OpenAI’s market potential.
- The for-profit arm was established in 2019, the year after Musk’s departure.
- The court has already found that Altman and Brockman wanted a for-profit model as early as 2017.
What This Means For You
If you’re a developer building on OpenAI’s models, this trial should worry you. A ruling against the current structure could mean sudden changes to API access, licensing, or even the availability of tools like GPT-4 and ChatGPT. If OpenAI is forced back into a nonprofit model, commercial use could be restricted. If the for-profit arm is dissolved, Microsoft may pull back integration, affecting Azure AI and Copilot.
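For developers, the practical defense against that uncertainty is to avoid hard-coding a single vendor into your application. Below is a minimal sketch of that idea: call sites depend on a small interface, and providers can be swapped or chained as fallbacks. The provider classes here are hypothetical stubs, not real SDK calls.

```python
# Sketch: insulate application code from any single AI vendor.
# If API access or licensing changes, you swap one adapter instead
# of rewriting every call site. Provider classes are hypothetical stubs.

from dataclasses import dataclass
from typing import Protocol


class ChatProvider(Protocol):
    """The one interface the rest of the app is allowed to see."""
    def complete(self, prompt: str) -> str: ...


@dataclass
class PrimaryProvider:
    """Stand-in for a vendor SDK; here it simulates an outage."""
    name: str = "primary"

    def complete(self, prompt: str) -> str:
        # Real code would call the vendor's client library here.
        raise RuntimeError(f"{self.name} unavailable")


@dataclass
class FallbackProvider:
    """Stand-in for a second vendor or a self-hosted model."""
    name: str = "fallback"

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] echo: {prompt}"


def complete_with_fallback(providers: list[ChatProvider], prompt: str) -> str:
    """Try each provider in order; return the first successful response."""
    last_err: Exception | None = None
    for provider in providers:
        try:
            return provider.complete(prompt)
        except Exception as err:
            last_err = err  # remember the failure, try the next provider
    raise RuntimeError("all providers failed") from last_err


print(complete_with_fallback([PrimaryProvider(), FallbackProvider()], "hello"))
# → [fallback] echo: hello
```

The point of the `Protocol` is that nothing downstream imports a vendor SDK directly, so a forced restructuring at one provider becomes a configuration change rather than a rewrite.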
For founders and builders, this is a cautionary tale about mission drift. You can start with noble intentions, but scale demands capital. And capital comes with strings. The moment you take outside money — especially from giants like Microsoft — your autonomy shrinks. This case shows how quickly a founding vision can become a legal liability. It also proves that in AI, the battle isn’t just technical. It’s ideological. And now, it’s in the hands of a jury.
Will the court decide that the promise of open, nonprofit AI was worth preserving — even if it meant losing the race? Or will it accept that in the real world, survival sometimes requires breaking promises?
Sources: MIT Tech Review, original report


