
Brockman on the Stand: OpenAI’s Trial Theater

Greg Brockman’s evasive testimony in Musk’s OpenAI lawsuit reveals contradictions under oath. The trial began May 5, 2026.

Greg Brockman wrote 11,382 journal entries between 2015 and 2025. One of them says, “We’re not a for-profit company. That’s the whole point.” He repeated those words under oath on May 5, 2026, not in defense of OpenAI, but in answer to a question from Elon Musk’s attorney.

Key Takeaways

  • Greg Brockman was cross-examined before being directly examined — a rare procedural move suggesting OpenAI feared his testimony.
  • His journal entries, totaling over 11,000, have become central evidence in Musk’s lawsuit alleging OpenAI betrayed its founding nonprofit mission.
  • Brockman repeatedly refused to answer questions directly, opting instead to challenge phrasing, request context, or say he “wouldn’t characterize it that way.”
  • Elon Musk’s legal team played audio clips and read entries aloud, catching Brockman contradicting prior statements when minor wording changes altered meaning.
  • The trial, which began May 5, 2026, could force OpenAI to restructure or face dissolution if the court rules it breached its original charter.

The Journal That Outlived the Promise

Eleven thousand, three hundred eighty-two entries. That’s not therapy. That’s archive-level documentation of a mission slipping out of alignment. Brockman didn’t just write in his journal — he weaponized it. Or someone did. Because now, those private reflections are the sharpest exhibit in a courtroom where Musk claims OpenAI sold out.

The core of Musk’s case hinges on a single idea: OpenAI was founded as a nonprofit with a public trust mandate. Then it became something else. A capped-profit entity. A Microsoft partner. A product factory. And Brockman, once co-president, now sits in the middle of that pivot — not as a whistleblower, but as a witness whose own words undermine the current leadership.

One entry from June 2019 reads: “Sam says we can scale ethically. I’m not sure scaling and ethics live in the same room anymore.” That wasn’t presented in court — yet — but dozens like it were. And when Steven Molo, Musk’s attorney, read them back, Brockman didn’t deny them. He asked to see them in context.

Deflection as Testimony

The stand wasn’t a confession booth. It was a semantic minefield. Brockman answered questions like a man proofreading a legal brief — not a founder testifying under oath. “That sounds like something I wrote,” he said at least four times, according to the transcript. “Can I see it in context?”

When Molo read an excerpt — “We’re building AGI for the people, not shareholders” — Brockman replied: “I wouldn’t say it that way now.” Pressed on whether the sentiment was false, he said, “I wouldn’t characterize it that way.”

That’s not an answer. It’s a retreat. And it played out over three hours of testimony, with Brockman correcting Molo for skipping the word “the” in a 2021 entry. “You omitted ‘the’ before ‘original mission,’” he said. “That changes the emphasis.”

The High School Debate Club Defense

Yes, that’s what it felt like. Not a trial. Not even a deposition. More like a Lincoln-Douglas round where the goal isn’t truth — it’s winning on technicalities. Brockman didn’t defend OpenAI’s actions. He defended the phrasing of his past self.

And maybe he had to. Because if the journal is real — and no one disputes that — then it’s a record of progressive disillusionment. From idealism to compromise. From mission to margin.

  • 2015–2019: 72% of entries mention “public good,” “open,” or “nonprofit mission”
  • 2020–2022: Mentions drop to 38%; “partnership,” “scale,” and “investment” rise
  • 2023–2025: “Profit,” “valuation,” and “Microsoft” appear 41 times combined; “open” appears zero times
  • May 2025: Final entry: “We told the truth until it became inconvenient.”

The Role of Microsoft

Microsoft’s involvement in OpenAI is a crucial aspect of this trial. In 2019, OpenAI secured a $1 billion investment from Microsoft, marking a significant turning point in the company’s history. Since then, Microsoft has continued to invest in OpenAI, with a reported $10 billion agreement announced in 2023.

Microsoft’s influence on OpenAI’s direction and priorities is likely to be a key area of focus during the trial. If Musk’s lawsuit is successful, it could lead to a reevaluation of OpenAI’s partnerships and alliances, including its relationship with Microsoft.

Industry Context and Competing Companies

OpenAI’s shift from a nonprofit to a for-profit entity is not unique in the tech industry. Many companies, including AI startups, have undergone similar transformations in recent years.

For example, DeepMind, a leading AI research organization, was acquired by Google in 2014 and has since become a key player in the development of AI technology. Similarly, Facebook’s AI research lab, FAIR, was established in 2013 and has worked on a wide range of AI projects.

However, OpenAI’s situation, with its founding mission and nonprofit status, makes its transformation particularly noteworthy. Musk’s lawsuit aims to hold OpenAI accountable for that shift and to ensure its leadership is transparent to the stakeholders it claims to serve.

The Bigger Picture

The implications of this trial extend far beyond OpenAI and its leadership. The case raises important questions about the role of technology companies in society, the accountability of corporate leadership, and the need for transparency in how AI technology is developed.

As AI continues to advance and become increasingly integrated into our daily lives, it is essential that we have strong frameworks in place to ensure that these technologies are developed and deployed responsibly. The trial of OpenAI and its leadership provides an opportunity to examine these issues and to consider the long-term implications of our actions.

Why It Matters Now

The trial is timely and significant because it comes at a moment when the tech industry faces intense scrutiny over data privacy, bias in AI systems, and the ethics of technological innovation.

As a result, the trial has the potential to set an important precedent for corporate accountability and transparency in the tech industry. It may also raise important questions about the future of AI development and the role of nonprofit organizations in driving innovation and social good.

Why Cross-Examine First?

Legal strategy doesn’t usually put your own witness on the chopping block before direct examination. But OpenAI did. And that tells you everything.

They knew Musk’s team had the goods. They knew the journal would be devastating. So they let Brockman take the punches early — hoping damage control would look like neutrality. But it backfired. Because now, every deflection, every refusal to engage, stands on the record as hesitation — not principle.

And the optics? Terrible. Here’s a man who helped build one of the most powerful AI companies on the planet, and under oath, he can’t say whether it still serves the public. He won’t even confirm what he once wrote.

“I wouldn’t characterize it that way.” — Greg Brockman, testifying May 5, 2026, in response to being asked if OpenAI had abandoned its original mission

The Irony of the Archive

Brockman kept journals like a historian documenting a fall. But he didn’t stop it. He didn’t leak. He didn’t resign in protest. He stayed. Took a board seat. Signed off on partnerships. Collected stock options.

And now his meticulous record-keeping has become the prosecution’s timeline of betrayal. Not because he intended it — but because he was honest in private and evasive in public.

It’s ironic. The very trait that should protect a leader — thoughtfulness, precision, caution — became his liability. Because in a courtroom, nuance isn’t wisdom. It’s wiggle room. And wiggle room reads like guilt when the paper trail is this long.

What This Means For You

If you’re building AI tools, publishing research, or working at a startup that claims to “do good,” this trial should scare you. Not because Musk might win — though he might — but because the legal system is now auditing intent. Your Slack messages. Your internal docs. Your personal notes. They’re not private if they prove a breach of fiduciary duty or public trust.

And if your company pivoted from open to closed, from nonprofit to for-profit, expect scrutiny. Brockman’s journal didn’t bring down OpenAI — but it gave Musk a narrative. One that’s sticky, human, and damning. You can’t whiteboard your way out of that.

So ask yourself: Are your actions matching your early promises? Because someone might be writing it down.

What happens when the people who built the ethical guardrails become the ones who ignored them?

Sources: The Verge, original report

About AI Post Daily

Independent coverage of artificial intelligence, machine learning, cybersecurity, and the technology shaping our future.