On April 30, 2026, inside a San Francisco federal courtroom, Elon Musk was asked under oath whether he believed OpenAI had betrayed its original mission. He didn’t hesitate. “They’re gonna want to kill me,” he said, according to testimony first reported by Wired. It wasn’t hyperbole. It was a prediction.
Key Takeaways
- Elon Musk testified on April 30, 2026, in his lawsuit against OpenAI, claiming the company abandoned its founding nonprofit principles.
- OpenAI’s shift to a capped-profit structure in 2019 is central to the case, with Musk arguing it violated the original agreement.
- Internal messages show Musk expressed concerns as early as 2018 about OpenAI’s direction, including its growing ties to Microsoft.
- The trial has exposed personal rifts between Musk and Sam Altman, once allies in the AI safety movement.
- If Musk prevails, it could force OpenAI to restructure or face financial penalties, potentially reshaping how AI labs govern themselves.
The Lawsuit No One Saw Coming
When Musk filed suit in early 2025, the tech world assumed it was another publicity stunt. That changed fast. The complaint alleges OpenAI reneged on its 2015 founding pledge to remain a nonprofit-first entity dedicated to ensuring artificial general intelligence benefits all of humanity. Instead, Musk claims, OpenAI morphed into a profit-driven operation — a pivot he never consented to.
The legal action centers on language in the original charter stating that any for-profit arm would exist solely to support the nonprofit’s mission. But after OpenAI accepted $13 billion from Microsoft — with more funding rounds expected — Musk argues the balance of power shifted. The nonprofit board, once independent, now answers to investors. That’s not what they signed up for, he says.
And this isn’t just about money. It’s about control. And access. And trust.
From Co-Founder to Outcast
Musk wasn’t just an early donor. He was one of OpenAI’s original co-founders, contributing $45 million and helping shape its early vision. But he left in 2018 — earlier than most remember — citing conflicts with Tesla’s own AI work. At the time, it seemed like a clean break.
Now, emails presented in court suggest otherwise. One message from Musk to Sam Altman in June 2018 reads: “If OpenAI becomes a closed, profit-maximizing company, it will be the opposite of what we intended.” That email was written months before OpenAI even announced its for-profit subsidiary.
Altman’s team argues Musk was informed and remained silent. Musk’s lawyers counter that he was misled. Documents show he continued to advise OpenAI informally well into 2019, believing he was helping steer a mission-driven project — not building a corporate asset for Microsoft.
“They’re Gonna Want to Kill Me”
Musk delivered that line flatly, without drama. But in a courtroom already thick with tension, it landed like a threat. Was he being theatrical? Maybe. But given the stakes, it wasn’t hard to believe.
OpenAI’s lawyers pressed him: Why wait until 2025 to sue? Why not act when the capped-profit model was announced in 2019?
His answer: “I gave them time to correct course. They didn’t.”
That’s the core of Musk’s argument — that he didn’t file suit immediately because he assumed OpenAI would course-correct. When it didn’t, when it deepened its partnership with Microsoft and began restricting access to its most powerful models, he felt the mission had been irreversibly compromised.
The Microsoft Shadow
No company looms larger in this trial than Microsoft. Its $13 billion investment isn’t just capital — it’s influence. Testimony has revealed that Microsoft executives have had repeated private meetings with OpenAI leadership, including product roadmap discussions once reserved for the nonprofit board.
Musk’s legal team introduced a calendar invite from October 2022 showing a strategy session between OpenAI CTO Mira Murati and Microsoft’s AI chief, marked “confidential — no board distribution.” That meeting occurred months before GPT-4’s release.
Was the nonprofit board sidelined? Musk says yes. OpenAI says no — that the for-profit entity has always managed product development, with oversight from the board.
But here’s the catch: the original charter never clearly defined how much authority the for-profit arm could wield. That ambiguity is now being litigated in real time.
- OpenAI accepted $1 billion from Microsoft in 2019, then $2 billion in 2021, and $10 billion in 2023
- The capped-profit model allows investors to earn returns up to 100x their investment (the arithmetic is sketched just after this list)
- Musk claims he was never given a seat on the for-profit board, despite his early role
- Internal Slack messages show OpenAI executives debating how to “manage Elon” as early as 2020
- The nonprofit board voted in 2024 to eliminate term limits, a move Musk calls “a power grab”
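To make that 100x cap concrete, here is a minimal sketch of the arithmetic, assuming a simple split where anything above the ceiling reverts to the nonprofit. The `capped_return` helper and the dollar figures are illustrative assumptions, not actual deal terms:

```python
def capped_return(invested: float, gross_return: float,
                  cap_multiple: float = 100.0) -> tuple[float, float]:
    """Split a gross return between an investor and the nonprofit.

    Illustrative sketch of a capped-profit structure: the investor keeps
    at most invested * cap_multiple; anything above that ceiling reverts
    to the nonprofit. Not actual deal terms.
    """
    cap = invested * cap_multiple                    # investor's lifetime ceiling
    investor_share = min(gross_return, cap)          # payout, capped at the ceiling
    nonprofit_share = max(gross_return - cap, 0.0)   # overflow goes to the mission
    return investor_share, nonprofit_share

# Hypothetical numbers: a $1B stake that eventually returns $150B gross.
investor, nonprofit = capped_return(1e9, 150e9)
print(f"Investor keeps ${investor / 1e9:.0f}B, nonprofit keeps ${nonprofit / 1e9:.0f}B")
# -> Investor keeps $100B, nonprofit keeps $50B
```

The point the sketch makes is scale: if the full $13 billion carried the same 100x multiple (later rounds reportedly carry lower caps), the ceiling would only bind after roughly $1.3 trillion in returns, which is why skeptics argue the cap constrains investors more in theory than in practice.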
The Bigger Picture: Can Mission-Driven AI Survive?
The OpenAI trial isn’t just a personal feud. It’s a stress test for the entire model of hybrid nonprofit-for-profit AI development. Other organizations have tried similar structures. Anthropic, for example, adopted a “long-term benefit trust” to maintain mission control while raising over $7 billion from Amazon and Google. The trust holds voting rights that prevent investor overreach — a firewall OpenAI never built.
DeepMind followed a different path. Acquired by Google in 2014, it operated under a strict ethics board until 2020, when reports surfaced that the board had been quietly dissolved. By 2023, DeepMind’s research priorities increasingly aligned with Google’s cloud and advertising divisions. The shift was subtle but real: breakthroughs in reinforcement learning still made headlines, but fewer of those breakthroughs focused on alignment or safety.
Then there’s Meta. It has never claimed a nonprofit origin, but it has leaned heavily into releasing open-weight models like Llama 3. The approach is transparent, but also strategic: by releasing powerful models under licenses that stop short of true open source, Meta gains influence over the developer ecosystem without ceding control. It’s openness with strings attached.
What makes OpenAI’s case unique is its origins. It was supposed to be different — insulated from market pressures, accountable to humanity, not shareholders. But without enforceable governance mechanisms, even the best intentions can erode. The trial is now asking whether a nonprofit charter can hold weight when $13 billion in venture-scale capital enters the room.
And that raises a harder question: can any frontier AI project remain independent? Training GPT-5-level models could cost over $100 million per run by 2026. Only a handful of companies can afford that. Governments are starting to fund AI research — the U.S. CHIPS and Science Act of 2022 allocated $200 million for AI safety — but it’s a drop in the bucket. So the dependency on private capital isn’t going away. The real issue is who gets to set the rules.
What Competitors Are Doing Differently
While OpenAI faces legal scrutiny, other AI labs are watching closely — and adjusting. xAI, Musk’s own venture, has taken the opposite approach. Since its founding in 2023, it has raised only $500 million, mostly from Musk himself. The company hasn’t partnered with a major tech firm. Its model, Grok, is integrated into X (formerly Twitter), but it’s not being monetized widely. Musk has said xAI’s goal is “truth-seeking” — a vague but deliberate contrast to OpenAI’s commercial trajectory.
Cohere, based in Toronto, has avoided consumer-facing products entirely. It focuses on enterprise AI, licensing models to banks and logistics firms. That strategy avoids public scrutiny while generating steady revenue. No flashy demos. No existential claims. It’s working: the company hit $120 million in revenue in 2025 and is on track to turn a profit this year.
Then there’s Stability AI. Once hailed as the open-source alternative to OpenAI, it nearly collapsed in 2024 after a failed funding round and mass layoffs. The company’s founder, Emad Mostaque, stepped down amid internal turmoil. But unlike OpenAI, Stability was built from day one as a for-profit. There was no mission betrayal — just mismanagement. Now, under new leadership, it’s restructuring around industrial applications, like manufacturing and design.
What stands out is how few alternatives exist to the OpenAI-Microsoft model. Even Google, with its vast resources, has struggled to match OpenAI’s velocity. Gemini Advanced, launched in early 2024, has improved but still lags behind GPT-4 in real-world benchmarks. Amazon’s Q, released in 2024, is mostly used internally. Apple, late to the game, acquired AI startup Percepto in 2025 for $400 million to accelerate its efforts. But none have cracked the code on balancing speed, safety, and independence.
In that vacuum, OpenAI’s governance model — whatever survives this trial — will become a blueprint. Or a warning.
What This Means For You
If you’re building AI systems, this trial should be front of mind. The outcome could set a precedent for how hybrid nonprofit-for-profit AI labs operate — or whether they can exist at all. If Musk wins, organizations may think twice before accepting massive private investments under vague governance structures. Transparency won’t be optional. It’ll be legally enforceable.
For developers, access to models could shift overnight. OpenAI might be forced to open-source more of its work or revert to a fully open model. Or, in a worst-case scenario, it could fracture — splitting the nonprofit from the for-profit in a way that disrupts API availability, model updates, and developer tooling. You’re not just watching a legal battle. You’re seeing the foundation of modern AI governance tremble.
There’s a deeper irony here: Musk, who once pushed for open AI development, is now suing to enforce openness — while Altman, who championed safety and control, leads a company accused of becoming too closed, too fast. The roles have flipped. The ideals haven’t. Not really.
“They’re gonna want to kill me.” — Elon Musk, April 30, 2026, during cross-examination in San Francisco federal court
So what happens if OpenAI loses? Can a mission-driven AI lab survive without massive capital? Can it resist acquisition, or partnerships that blur its ethics? Or is the real lesson of this trial that in 2026, no one can build frontier AI without someone else’s money — and with that money comes control?
Sources: Wired, The Verge