On April 29, 2026, nine jurors in a Northern California courtroom will begin hearing arguments in a trial that could dismantle one of the most powerful entities in artificial intelligence. The plaintiff: Elon Musk. The defendants: Sam Altman, Greg Brockman, and OpenAI itself. At stake is not just $134 billion in damages, but the very structure of the company Musk helped found in 2015 — and whether it can continue to operate as a for-profit enterprise.
Key Takeaways
- Elon Musk is suing OpenAI, Sam Altman, and Greg Brockman, alleging they deceived him about OpenAI’s shift from nonprofit to for-profit.
- He’s seeking $134 billion in damages — to be paid to OpenAI’s nonprofit, not himself — and wants Altman and Brockman removed from leadership.
- The trial begins April 29, 2026, in Northern California, with Musk, Altman, Brockman, Ilya Sutskever, Mira Murati, and Satya Nadella all expected to testify.
- Musk claims Altman and Brockman promised to keep OpenAI nonprofit in 2017, then secretly moved to create a for-profit arm.
- The court has already found that in 2017, Altman and Brockman wanted a for-profit entity, while Musk proposed merging OpenAI with Tesla.
The $134 Billion Betrayal Claim
Musk isn’t asking for a dime for himself. That’s the twist. Instead, he wants any damages awarded — up to $134 billion — funneled into OpenAI’s nonprofit arm. It’s a legal maneuver wrapped in moral framing: he’s not after personal gain, but restitution for a mission he believes was hijacked.
The number itself is staggering, but it’s rooted in Musk’s argument that OpenAI’s current valuation — inflated by proprietary models like GPT-5, closed-source practices, and a deep financial tie to Microsoft — would not exist without the original nonprofit foundation he helped fund. He donated $38 million in the early days, a sum now dwarfed by the company’s trajectory but central to his claim of trust violated.
He alleges that in 2017, when the idea of a for-profit subsidiary first surfaced, Altman and Brockman assured him they were committed to the nonprofit model. Internal communications, reportedly including "cringey texts" and "raw diary entries," may soon make that promise public. If those messages show deliberate obfuscation, Musk's case gains traction. If they show Musk himself pushing for a for-profit pivot, as OpenAI claims, then his standing collapses.
OpenAI’s Survival on Trial
This isn’t just a grudge match. The outcome could force OpenAI to unwind its corporate structure. Musk is asking the court to dissolve the for-profit subsidiary and restore OpenAI as a pure nonprofit — a move that would upend its partnerships, R&D funding, and product roadmap.
OpenAI’s defense hinges on a shifting landscape. The company argues that by 2017, it was clear the nonprofit model couldn’t compete with well-funded rivals like Google and Meta. Training frontier models demands billions in compute, infrastructure, and talent — costs a donation-based org can’t sustain. The pivot wasn’t betrayal, they say, but survival.
And they’ve got receipts: internal discussions, board minutes, and reportedly Musk’s own interest in becoming CEO of the proposed for-profit arm. That detail undercuts his narrative of being blindsided. If Musk wasn’t just on board with the idea but wanted to lead it, his claim of deception starts to look like sour grapes.
What the Court Has Already Found
Before the trial even began, the court confirmed a key fact: in 2017, Altman and Brockman wanted to establish a for-profit entity. Musk, meanwhile, pushed to merge OpenAI with Tesla, where it would operate under an existing corporate umbrella. That plan failed. When Musk threatened to cut funding, the court found, Altman and Brockman reaffirmed their commitment to the nonprofit structure — but continued planning the for-profit shift.
That timing is everything. Was it a lie or a pivot? Courts don’t decide based on ethics alone. They ask: was there fraud? Did Musk rely on false statements to his detriment? And does he even have standing to sue?
The Cast of Characters on the Stand
The trial will be a blockbuster not just for its stakes, but for its cast. Musk, Altman, and Brockman will all take the stand: three architects of modern AI, now testifying against each other. Former chief scientist Ilya Sutskever, who left OpenAI under murky circumstances in 2024, will likely be pressed on internal debates about mission drift. Mira Murati, former CTO, may reveal how engineering decisions were shaped by business pressures.
And then there’s Satya Nadella. Microsoft’s CEO isn’t a defendant, but his testimony could be explosive. Microsoft has poured over $13 billion into OpenAI since 2019, owns a reported 49% economic interest and 7% voting stake, and integrates its models across Azure, Office, and GitHub. If Nadella confirms that Microsoft assumed OpenAI was always meant to commercialize — or worse, that Musk approved it — Musk’s credibility takes a direct hit.
But if Nadella admits Microsoft knew the nonprofit charter was being bent, that could open a separate can of regulatory worms. Is a nonprofit allowed to operate a for-profit subsidiary so deeply tied to a single corporate partner? The IRS doesn’t love that setup.
The Documents That Could Break the Case
Reports suggest "cringey texts" and "raw diary entries" will come to light. These aren't legal boilerplate. They're human artifacts, the kind that reveal tone, intent, and contradiction.
Imagine a 2017 text from Altman to Brockman: “Musk’s losing interest. Tell him we’re staying nonprofit. We’ll figure the rest later.” Or a diary entry from Musk: “Sam wants to sell out. I won’t let AI become another ad platform.” These aren’t just receipts — they’re narrative gold.
And because the jury will deliver an advisory verdict — a non-binding recommendation — the emotional weight of these documents could shape the judge’s final decision more than dry legal arguments.
The Bigger Picture: What’s at Stake Beyond OpenAI
This case isn’t just about one founder’s wounded pride. It’s a test of whether hybrid models in AI — those that straddle public mission and private capital — can survive legal scrutiny. OpenAI isn’t alone. Anthropic, founded by ex-OpenAI researchers, operates under a similar structure: a public benefit corporation with a “long-term stewardship” model designed to prevent hostile takeovers. It has raised over $7 billion from investors like Amazon and Google. If the court rules that OpenAI violated its nonprofit obligations, Anthropic’s model could face challenges from regulators, donors, or even its own shareholders.
The Allen Institute for AI, funded largely by the late Paul Allen’s estate, remains fully nonprofit. But it’s not building commercial-grade models. Its flagship LLMs are open-source and lack real-time deployment at scale. Meanwhile, Meta’s AI research arm operates under a corporate umbrella but releases models like Llama 3 openly. The contrast is stark: full openness without the nonprofit label, or restricted access with a public mission. OpenAI sits in the middle — and that middle ground may now be legally unstable.
Regulators in the EU and the U.S. are already probing how AI firms handle transparency, ownership, and public accountability. A ruling that OpenAI misled donors or violated charitable trust laws could trigger IRS audits, state attorney general investigations, or new legislation targeting “mission drift” in tech nonprofits. The timing couldn’t be worse — with the EU AI Act fully in force by 2026 and the U.S. pushing for AI safety standards through NIST and the FTC.
Technical and Governance Implications of a Nonprofit Mandate
Forcing OpenAI back into a pure nonprofit model would have immediate technical consequences. Frontier AI development today runs on massive computational scale. GPT-5, for example, is estimated to have required over 100,000 GPU-years of training compute, mostly sourced through Microsoft’s Azure infrastructure. That kind of investment doesn’t come from donations. It comes from revenue-sharing agreements, enterprise contracts, and equity stakes.
Without the for-profit arm, OpenAI would lose access to the $13 billion from Microsoft — funds used to build AI supercomputers, hire top-tier researchers, and license data. Even with $38 million from Musk and early donations, the nonprofit had burned through most of its cash by 2018. The pivot wasn't theoretical. It was financial reality. Other open research efforts, such as the nonprofit EleutherAI and Hugging Face's research teams, do significant work, but they train models orders of magnitude smaller than GPT-5.
There’s also the issue of talent. Top AI researchers command multimillion-dollar compensation packages — often including equity. A nonprofit can’t offer stock options. That puts it at a disadvantage against Google DeepMind, Meta AI, and even Anthropic, which uses long-term incentives to retain staff. After Ilya Sutskever’s departure in 2024, OpenAI lost a key technical leader. A forced restructuring could trigger more exits, especially if researchers fear instability or reduced resources.
And what about open access? Musk argues OpenAI betrayed its original mission by going closed-source. But open-sourcing GPT-5-level models poses serious safety risks. The company’s safety team, led by figures like Jan Leike until his 2024 resignation, has long warned that unrestricted release could enable disinformation, malware generation, and deepfake abuse. The tension between openness and safety is real — and no legal ruling will resolve it easily.
What This Means For You
If you’re building AI tools, this trial matters. A ruling against OpenAI could force a reevaluation of hybrid nonprofit-for-profit models across the industry. Companies like Anthropic and the Allen Institute may face renewed scrutiny over their own funding structures. Founders who promise “AI for good” while taking venture capital will have to defend that duality in court, not just in PR.
For developers, the stakes are practical. If OpenAI is forced to reopen its models or dissolve its for-profit arm, access to APIs, fine-tuning tools, and enterprise features could change overnight. Licensing, pricing, and data usage policies — all tied to the company’s legal standing — may reset. And if Musk wins, the precedent could chill private investment in AI research, pushing innovation back into academia or closed corporate labs.
What happens when the people who built the future start suing each other over who owns its soul?
Sources: MIT Tech Review, original report


