Key Takeaways
- OpenAI CEO Sam Altman and president Greg Brockman testified in the landmark trial against Elon Musk, rejecting Musk’s allegations of deceit.
- Musk claims OpenAI deceived him into donating $38 million, which the company allegedly diverted to a for-profit arm.
- However, Brockman revealed that Musk had actually pushed for OpenAI to create a for-profit arm and wanted “absolute control” over it.
- Shivon Zilis, a former OpenAI board member and mother of four of Musk’s children, testified that Musk tried to recruit Altman to lead a new AI lab at Tesla.
- The trial could upend OpenAI’s race toward an IPO at a valuation approaching $1 trillion.
Musk’s Motives Under Scrutiny
As the trial enters its second week, Musk’s motives for suing OpenAI are under intense scrutiny. Elon Musk, the billionaire entrepreneur and xAI founder, claims that OpenAI CEO Sam Altman and president Greg Brockman deceived him into donating $38 million to the company. Musk alleges that OpenAI promised to remain a nonprofit dedicated to developing AI for the benefit of humanity, only to later accept billions of dollars in investment from Microsoft and restructure itself to operate a for-profit subsidiary.
That shift, according to Musk, violated the original agreement among the cofounders. He argues that the nonprofit charter was meant to be permanent, not a temporary vehicle for fundraising. The breach, he claims, wasn’t just financial—it was ethical. OpenAI, he says, turned its back on its founding principles. But internal communications and testimony presented so far suggest a more complicated picture—one where Musk wasn’t just a passive donor but an active force pushing the company toward commercialization.
The trial has drawn attention not just for the personalities involved, but for what it reveals about the fragility of mission-driven startups when they scale. OpenAI began as a collective of researchers alarmed by the unchecked power of tech giants. Now, it’s sitting on a valuation that could top $1 trillion. That kind of growth doesn’t happen without compromise. The question is whether those compromises cross a legal or ethical line.
Musk’s History with OpenAI
Musk cofounded OpenAI in 2015 with Altman, Brockman, and others, but left the company in 2018. Since then, he’s been a vocal critic of OpenAI’s direction, accusing the company of prioritizing profits over its original mission. Musk has also founded his own AI company, xAI, which is now a division of his rocket company, SpaceX.
The 2015 launch was a direct response to concerns about Google DeepMind and the centralization of AI power in corporate labs. The original OpenAI charter emphasized openness, safety, and broad distribution of benefits. At the time, Musk was vocal about the existential risks of AI and the need for an independent research body. He donated $10 million at incorporation and promised more—eventually totaling $38 million in pledged contributions, though not all was delivered before his exit.
His departure in 2018 wasn’t sudden. Tensions had been building. Musk wanted deeper integration with Tesla, particularly around autonomous driving, but Altman and Brockman resisted. They worried about conflicts of interest and the risk of OpenAI becoming an extension of Musk’s empire. Internal emails show Musk repeatedly suggesting that OpenAI pivot to a product-first model, with Tesla as its primary customer.
By mid-2018, Musk’s influence had waned. He was spending more time on Tesla and SpaceX, and his demands for control clashed with the leadership’s desire to maintain independence. When he proposed merging OpenAI with Tesla, the board rejected it. Days later, Musk announced he was stepping down. At the time, he said it was due to time constraints. Now, his lawsuit suggests it was because he felt sidelined.
Brockman’s Testimony
Greg Brockman took the stand on Monday, walking into the courtroom in a blue suit and tie, holding hands with his wife, Anna Brockman. Brockman was serene, even chipper, as he recalled OpenAI’s early days. He grew agitated, however, under pointed questioning from Elon Musk’s lawyer, Steven Molo.
“I was always concerned about the company’s financial situation,” Brockman said. “We were burning through cash, and I was worried that we wouldn’t be able to make payroll.”
Brockman detailed the precarious early years. In 2017 and 2018, OpenAI burned through $20 million annually with no clear path to revenue. The team had built impressive models, but without infrastructure to deploy them at scale, commercial viability was out of reach. Brockman said Musk was aware of the financial strain and had urged the team to “move faster” on monetization.
He presented a series of text messages from 2017 in which Musk wrote: “We need a product. A real one. Not research demos.” In another exchange, Musk pushed for a for-profit subsidiary, arguing it would attract top talent and allow for faster iteration. “Without equity incentives, we lose to Google,” he wrote. Brockman said the idea was discussed but not acted on at the time.
The most explosive moment came when Brockman revealed that Musk had messaged him two days before the trial began, asking if he would be interested in settling. When Brockman suggested that both sides drop their claims, Musk texted back: “By the end of this week, you and Sam will be the most hated men in America. If you insist, so it will be.”
The message, displayed on a courtroom screen, drew audible gasps. It’s now part of the public record and has been cited by legal analysts as evidence of intent—not just to litigate, but to damage reputations. Molo argued the message was taken out of context, but the judge allowed it to stand.
Shivon Zilis’ Testimony
Shivon Zilis, a former OpenAI board member and the mother of four of Musk’s children, testified that Musk tried to recruit Altman to lead a new AI lab at Tesla. Zilis revealed that Musk had approached her with the idea, and that Altman had shown interest in the proposal.
She described a meeting in late 2017 at Musk’s home in Bel Air, where he laid out a vision for an AI division within Tesla focused on real-time neural networks for autonomous vehicles. Musk wanted Altman to run it, offering him a significant equity stake and full autonomy over research direction. Zilis said Altman was intrigued but hesitant. “He asked if it would conflict with OpenAI’s mission,” she recalled. “I told him Elon said it wouldn’t.”
Zilis also testified that Musk had expressed frustration with OpenAI’s pace. “He said they were moving like a nonprofit,” she said, “but the race is won by for-profits.” She said Musk viewed the nonprofit structure as a liability, not a virtue. Her testimony supports Brockman’s account that Musk wasn’t just open to commercialization—he was pushing for it.
Her role in the case has drawn attention beyond the legal details. As a former board member and Musk’s partner, her credibility is central to the narrative. Her decision to testify against Musk—despite their personal ties—suggests a deep rift or, alternatively, a calculated move to protect her own standing in the AI community. She still holds advisory roles at several AI startups and has been active in AI ethics discussions.
The Competitive Landscape
This trial isn’t playing out in a vacuum. It’s unfolding at a time when the AI industry is undergoing rapid consolidation. OpenAI, despite its origins, is now in direct competition with Google DeepMind, Anthropic, and Musk’s own xAI. The race isn’t just about models—it’s about trust, talent, and access to capital.
Microsoft’s $13 billion investment in OpenAI has given the company a massive edge in compute resources. That funding enabled the development of GPT-4, the training of massive multimodal models, and the rollout of enterprise tools like Copilot. But it also tied OpenAI’s future to a commercial partner—one with its own strategic interests.
xAI, founded in 2023, is still in earlier stages. It has raised $6 billion, mostly from Musk and his allies, and has launched Grok, an AI assistant integrated into X (formerly Twitter). But it lacks the infrastructure and user base of OpenAI’s offerings. Legal analysts say Musk’s lawsuit could be as much about slowing OpenAI down as it is about seeking damages. A prolonged legal battle could delay the IPO, scare off partners, and give xAI time to catch up.
Anthropic, backed by Amazon and Google, is another player watching closely. Its leadership, including former OpenAI researchers, has positioned itself as more aligned with safety norms. If OpenAI is found to have misled donors or violated its charter, it could strengthen Anthropic’s claim to being the “ethical” alternative.
The trial also highlights the blurred lines between personal ambition and corporate strategy in Silicon Valley. Musk, Altman, and Brockman all moved in the same circles before the split. Now, they’re on opposite sides of a legal war that could reshape how AI companies are governed, funded, and held accountable.
What This Means For You
The outcome of the trial could have significant implications for the AI industry, particularly if OpenAI’s IPO plans are put on hold. If Musk’s allegations are proven, the ruling could set a precedent that pushes companies toward greater transparency and accountability in their dealings with donors, investors, and partners.
Developers and builders should be aware of the potential risks and challenges associated with working with AI companies, especially those that are restructured or have complex financial arrangements. It’s essential to conduct thorough due diligence and ensure that your interests are aligned with those of your partners.
Consider a startup founder who licenses OpenAI’s API to build a healthcare chatbot. If OpenAI’s legal status changes—if it’s forced to restructure or return funds—the startup could lose access to core infrastructure overnight. Contracts based on stability assumptions could collapse. That’s not just a technical risk; it’s a business survival risk.
Another scenario: a researcher joining an AI lab backed by a billionaire founder. The lab promises academic freedom but operates under a nonprofit banner while raising venture capital. The OpenAI case shows how quickly governance models can shift. Employees could find themselves working for a for-profit entity despite joining a mission-driven project.
A third scenario: investors in AI startups should pay close attention to charter terms and founder agreements. Vague language about “benefiting humanity” may not hold up in court. The Musk-OpenAI dispute shows that early-stage commitments can become liabilities if they’re not legally airtight.
The trial also raises questions about the accountability of AI companies and their leadership. As the industry continues to grow and evolve, it’s essential to establish clear guidelines and regulations to ensure that companies prioritize their mission and values over profits.
What Happens Next
Both sides are expected to call more witnesses in the coming weeks. Sam Altman is scheduled to testify, and legal analysts expect Musk to take the stand himself. The judge has indicated that discovery will remain open for several months, meaning new documents could surface even after the trial concludes.
If Musk wins, OpenAI could face financial penalties, governance changes, or even a forced restructuring of its for-profit arm. That might delay or derail the IPO, which was expected within 18 months. A loss for Musk could weaken xAI’s position and expose him to counterclaims about witness intimidation, given the text message to Brockman.
Regardless of the verdict, the trial has already changed the conversation around AI governance. Founders can no longer assume that goodwill or mission statements will protect them from legal challenges. The line between nonprofit ideals and commercial reality has never been more contested.
Will Musk’s allegations hold up, or will OpenAI’s leadership prevail? Either way, the verdict will reverberate across the AI industry and its stakeholders.
Sources: MIT Tech Review


