Elon Musk’s AI Downfall
In a federal courtroom in Oakland, California, on May 1, 2026, Elon Musk sat in a crisp black suit, hands folded, as he testified under oath that he was a fool.
Key Takeaways
- Elon Musk admitted under oath that xAI distills OpenAI’s models to train its own AI, including Grok.
- He claims he donated $38 million to OpenAI under the belief it would remain a nonprofit.
- Musk argues OpenAI’s shift to a for-profit structure violated its founding mission.
- OpenAI’s lawyers countered that Musk never intended for the company to stay nonprofit.
- The trial could block OpenAI’s $1 trillion IPO and reshape the AI industry’s governance.
The Hollow Core of the AI Arms Race
What happened in Oakland wasn’t just a contract dispute. It was a public autopsy of the myth that AI development is guided by principle. Musk, who once called for a pause on AI training runs, now admits his own company, xAI, relies on OpenAI’s models to build Grok. That’s not competition. That’s recycling.
And not quietly, either. The admission came with audible gasps in the courtroom. Lawyers paused. Reporters stopped typing. Even the protesters outside—some holding signs that read “Quit ChatGPT,” others “Boycott Tesla”—seemed to feel the shift through the walls.
Musk framed the lawsuit as a crusade to save OpenAI’s soul. But the irony is too thick to ignore: he’s suing OpenAI for betraying its nonprofit roots while running an AI company that piggybacks on its tech. If this is guardianship, it’s armed with contradictions.
“I Was a Fool” — And Why It Matters
That’s what Musk told the jury. Not “I was misled.” Not “I was deceived.” I was a fool. The South African accent curled around the words like smoke. He said it plainly: “I gave them $38 million of essentially free funding, which they then used to create what would become an $800 billion company.”
The number is critical. $38 million. Not an investment. A donation. That’s the foundation of his claim—that OpenAI was supposed to be a public good, not a vehicle for personal wealth. He says he cofounded it in 2015 with Sam Altman and Greg Brockman to counter Google’s AI dominance, not to seed a trillion-dollar private empire.
But here’s what Musk didn’t say: when he left OpenAI’s board in 2018, he signed a resignation letter that waived any ownership claims. He didn’t sue then. He didn’t speak out. He waited until OpenAI was on the verge of a $1 trillion IPO—and until his own AI venture, xAI, was prepping to go public as part of SpaceX, targeting a $1.75 trillion valuation by June 2026.
The Timing Isn’t Coincidental. It’s Calculated.
William Savitt, OpenAI’s lawyer and a former Tesla counsel, didn’t hold back. He argued Musk was never committed to OpenAI being a nonprofit. He pointed to emails, board meetings, and Musk’s own past statements suggesting that scale required capital—that true AI safety might need for-profit muscle.
Savitt’s tone was surgical. He didn’t shout. He didn’t need to. He laid out a timeline: Musk pushed for faster scaling, lobbied for more control, then left when he didn’t get it. Now he wants back in—on his terms.
This isn’t the first time Musk has been accused of playing a double game. Years before founding xAI, he reportedly described GPT-3 as “worth more than all of Tesla’s stock”—yet, in the same breath, criticized the model for lacking common sense. It was a classic case of a conflicted message: talk up AI’s value while disparaging the systems that embody it.
Fast-forward to the present, and the contradictions have only multiplied—culminating in xAI’s legal fight against state AI regulation.
The Safety Hypocrisy
Musk painted himself as AI’s reluctant prophet—one who warned Larry Page that AI could wipe out humanity. “That will be fine as long as artificial intelligence survives,” Page allegedly replied. Musk called that the origin of his alarm.
And yet.
xAI sued the state of Colorado in April 2026 over AI regulations. Not supported them. Sued. The case challenged a state law requiring transparency in AI training data and risk assessments for high-impact models. xAI argued it would stifle innovation. The lawsuit is ongoing.
So Musk warns of a “Terminator situation” in court. But when a state tries to enforce safeguards, his company fights it. That’s not stewardship. That’s theater.
The dichotomy is particularly striking when compared to his words on AI safety. In a 2020 interview with TIME Magazine, Musk emphasized the need for AI developers to prioritize safety above all else. He warned that the lack of accountability and oversight in the AI community was a ticking time bomb, waiting to unleash chaos on the world.
But xAI’s conduct tells a different story. Rather than championing accountability and transparency, the company is litigating for a more permissive regulatory environment. It’s a split that has left many in the tech community scratching their heads, wondering which version of Musk is the real one.
Who Owns the Mission?
- OpenAI was founded as a nonprofit with a capped-profit subsidiary.
- In 2019, it restructured, allowing investors to claim returns beyond the cap.
- Musk claims this violated the original agreement.
- OpenAI says the change was necessary to attract the capital needed for AGI.
- Musk is asking the court to remove Altman and Brockman and restore the nonprofit structure.
The legal question is narrow. The implications are massive. If the court agrees with Musk, OpenAI could be forced to unwind nearly a decade of corporate evolution. Its IPO? Gone. Its investor deals? Nullified. Altman? Out.
But what about xAI? If OpenAI’s for-profit shift was illegitimate, what does that say about a company that’s not just for-profit but publicly litigating against oversight?
Distillation: The Quiet Theft No One Talks About
Here’s the most underreported detail from the trial: Musk admitted that xAI distills OpenAI’s models to train its own. That means xAI feeds OpenAI’s outputs—say, responses from GPT-4 or GPT-4.5—into its training pipeline to teach Grok how to behave similarly.
It’s not illegal. It’s not even uncommon. But it’s rarely acknowledged so bluntly in court.
Distillation is efficient. It’s also a shortcut. You don’t need the same compute. You don’t need the same data. You just need access to the output—and the willingness to copy it. And Musk, while accusing OpenAI of betrayal, is doing the same thing—just in public.
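Mechanically, distillation trains a small “student” model to imitate a larger “teacher” by matching the teacher’s output distribution instead of hard labels. Here is a minimal, self-contained sketch in Python with NumPy—the logits are made up for a single toy prompt, and no real model or API is involved:

```python
import numpy as np

def softmax(z, temperature=1.0):
    """Temperature-scaled softmax; higher temperature softens the distribution."""
    z = np.asarray(z, dtype=float) / temperature
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kl_divergence(p, q):
    """KL(p || q): how far the student's distribution q is from the teacher's p."""
    return float(np.sum(p * np.log(p / q)))

# Hypothetical teacher logits for one prompt (a stand-in for a large model's output).
teacher_logits = np.array([2.0, 1.0, 0.1])
T = 2.0
soft_targets = softmax(teacher_logits, temperature=T)  # softened "soft labels"

# Train a tiny "student" (just three logits here) by gradient descent on the KL loss.
student_logits = np.zeros(3)
lr = 0.5
for _ in range(500):
    q = softmax(student_logits, temperature=T)
    # Gradient of the cross-entropy/KL loss w.r.t. the student logits is (q - p) / T.
    grad = (q - soft_targets) / T
    student_logits -= lr * grad

final_q = softmax(student_logits, temperature=T)
print(kl_divergence(soft_targets, final_q))  # converges toward zero
```

The point the sketch makes concrete: the student never needs the teacher’s weights, training data, or compute budget—only its outputs.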
There’s no irony meter for tech billionaires, but if there were, it would’ve buried the needle that day.
The AI Industry’s Governance Crisis
The trial has highlighted a deeper issue in the AI industry: its governance crisis. As AI becomes increasingly pervasive, the industry is struggling to create a framework that balances innovation with accountability.
On one hand, the industry needs to encourage innovation and experimentation. This means creating a permissive regulatory environment that allows companies to test the limits of AI technology.
On the other hand, the industry needs to ensure that AI is developed responsibly and with transparency. This means implementing strong oversight mechanisms that prevent companies from exploiting AI for their own gain.
The problem is that the industry is still grappling with how to strike this balance. The trial is a manifestation of that crisis: Musk demands accountability from OpenAI even as his own company litigates against state oversight, while OpenAI defends a structure built to move fast and raise capital.
What This Means For You
If you’re building AI models, this trial should keep you up at night. The outcome could redefine who owns AI advancements when mission statements collide with market forces. If Musk wins, it sets a precedent that founders can sue to enforce ideological purity—even years after stepping away. That’s dangerous for innovation. But if he loses, it confirms that once a nonprofit opens the for-profit door, there’s no closing it.
And if you’re using OpenAI’s APIs or public models: assume your work can and will be distilled. Grok is doing it. Others will follow. Your differentiator isn’t just performance. It’s opacity. Obfuscation. Legal protection. The open era of AI is being eaten by the closed.
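The pipeline implied here is simple to the point of being uncomfortable. A hypothetical sketch of sequence-level distillation from a hosted model—`query_teacher` is a placeholder for any provider’s completion endpoint, not a real client call:

```python
def query_teacher(prompt: str) -> str:
    # Placeholder: in practice this would call a hosted model's API.
    # No real provider client is used here.
    return f"teacher answer to: {prompt}"

def build_distillation_set(prompts):
    """Pair each prompt with the teacher's output; a student model is later
    fine-tuned on these pairs as ordinary supervised data."""
    return [{"prompt": p, "completion": query_teacher(p)} for p in prompts]

dataset = build_distillation_set([
    "What is model distillation?",
    "Summarize the Musk v. OpenAI trial.",
])
print(len(dataset))  # prints 2
```

Anyone with API access can run a loop like this at scale—which is exactly why providers’ terms of service, not technology, are the main barrier.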
What happens when the most vocal critic of AI recklessness is also one of its most aggressive opportunists? The courtroom in Oakland didn’t answer that. But it made the question impossible to ignore.
Sources: MIT Tech Review, original report