Elon Musk Testifies in Oakland Trial Against OpenAI CEO Sam Altman, President Greg Brockman
Key Takeaways
- Musk testifies in Oakland lawsuit alleging Altman and Brockman steered OpenAI to for-profit status.
- He says leadership hijacked OpenAI and duped him into funding it.
- Musk testified for over seven hours across three days in Oakland.
Trial in Oakland
Elon Musk testified in a trial in Oakland, California, in which he is suing OpenAI CEO Sam Altman and president Greg Brockman, alleging they abandoned OpenAI’s original nonprofit mission in favor of commercial interests.
In the first week of the landmark dispute, Musk “took the stand in a crisp black suit and tie” and argued that Altman and Brockman deceived him into bankrolling the company.

Musk framed his involvement as charitable giving in the service of AI safety, telling the jury, “I was a fool who provided them free funding to create a startup,” and describing himself as having cofounded OpenAI in 2015 with Altman and Brockman.
He said he gave OpenAI “$38 million of essentially free funding,” which he said was used to create what would become “an $800 billion company.”
The courtroom in Oakland was packed with “armies of lawyers carrying boxes of exhibits,” with “journalists typing away at their laptops,” and “a handful of concerned OpenAI employees,” while outside “protesters lined the streets” urging people to quit ChatGPT and boycott Tesla.
Musk is asking the court to remove Altman and Brockman from their roles and to unwind the restructuring that allowed OpenAI to operate a for-profit subsidiary.
The trial’s stakes, as described by MIT Technology Review, include the possibility that the outcome could “upend OpenAI’s race toward an IPO at a valuation approaching $1 trillion,” while xAI is expected to go public as part of SpaceX “as early as June” at a target valuation of $1.75 trillion.
Safety warnings and courtroom sparring
Musk’s testimony repeatedly returned to AI safety, with the trial featuring direct arguments about who should be the “steward of AI safety.”
In MIT Technology Review’s account of early testimony, Musk said he cofounded OpenAI as a “counterbalance to Google,” which he described as leading the AI race, and he told the jury that when he asked Google cofounder Larry Page what happens if AI tries to wipe out humanity, Page said, “That will be fine as long as artificial intelligence survives.”

Musk later told the jury, “The worst-case scenario is a Terminator situation where AI kills us all,” and in other coverage he warned, “We all could die,” while the court limited related testimony.
The Intercept described Musk’s warning as central to his lawsuit about safety, quoting Musk’s testimony that “It could kill us all,” and “We don’t want to have a ‘Terminator’ outcome.”
OpenAI’s lawyer William Savitt challenged Musk’s framing, arguing that Musk was not a “paladin of safety and regulation,” and Savitt cross-examined Musk by pointing to xAI’s legal action in April over an AI law designed to prevent algorithmic discrimination.
When Musk’s lawyer Steven Molo argued that OpenAI could not be trusted to build AI safely, Judge Yvonne Gonzalez Rogers responded that “Despite these risks, your client is creating a company that’s in the exact space,” referring to xAI.
The judge then snapped when lawyers began talking over each other, saying, “This is not a trial on whether or not artificial intelligence has damaged humanity,” and she added, “I suspect there’s plenty of people who don’t want to put the future of humanity in Mr. Musk’s hands.”
Musk’s funding story and xAI links
Beyond safety rhetoric, Musk’s testimony in Oakland centered on how he says he helped create OpenAI and how he claims he was later deceived.
MIT Technology Review reported that Musk “admitted that xAI distills OpenAI’s models,” noting that his own AI company xAI, “which makes the chatbot Grok,” uses OpenAI’s models to train its own.
Musk reiterated that he had been “a fool who provided them free funding to create a startup,” arguing that when he cofounded OpenAI in 2015, he was donating to a nonprofit developing AI “for the benefit of humanity, not to make the executives rich.”
He repeated, “I gave them $38 million of essentially free funding,” adding that the money was used to create what would become “an $800 billion company.”
News Mobile described Musk’s broader narrative of authorship and recruitment, saying he testified that he personally supplied “the idea, name, initial funding, and key recruitment, including hiring top researcher Ilya Sutskever.”
That same account said Musk argued the organization “would not exist without his contributions,” and it said he claimed his connections helped secure support from Microsoft CEO Satya Nadella and Nvidia CEO Jensen Huang.
In MIT Technology Review’s account, Musk also said he was “full of remorse,” and he described his lawsuit as an attempt to “save OpenAI’s mission to develop AI safely by restoring the company to its original nonprofit structure.”
Competing narratives of motives
The trial’s central dispute is not only what OpenAI became, but what Musk says his own motivations were, and how OpenAI’s lawyers portray them.
MIT Technology Review described Musk’s argument that he was trying to save OpenAI’s mission by restoring the company to its original nonprofit structure, while OpenAI’s lawyer William Savitt countered that Musk was “never committed to OpenAI being a nonprofit” and instead was suing to undermine his competitor.

Axios similarly described Musk’s self-portrait as an AI safety advocate, saying Musk “portrayed himself in court this week as a leading advocate for AI safety — in contrast to what he described as the profit-consumed OpenAI that he's suing.”
Axios also reported that Musk argued “the only way to keep AI from ‘killing us all’ was to keep it out of the hands of anyone trying to make money on it.”
Axios further said Musk later acknowledged that his own AI company, xAI, is a for-profit, and it tied that to the fact that “SpaceX recently acquired xAI and the rocket company is in an SEC quiet period ahead of a planned public offering.”
In the same MIT Technology Review account, Savitt repeated from the lectern that Musk was not a “paladin of safety and regulation,” and Judge Yvonne Gonzalez Rogers’s remark that “I suspect there’s plenty of people who don’t want to put the future of humanity in Mr. Musk’s hands” underscored the court’s focus on the competing safety claims.
What comes next
The sources describe multiple forward-looking pressures tied to the trial, including the next phase of testimony and potential regulatory scrutiny.
Axios said, “Musk's cross-examination continues Thursday in Oakland, California,” placing the next procedural step in the same courthouse setting.

MIT Technology Review reiterated that the outcome could “upend OpenAI’s race toward an IPO at a valuation approaching $1 trillion,” while xAI is expected to go public as part of SpaceX “as early as June” at a target valuation of $1.75 trillion.
News Mobile said the trial highlighted tensions over AI extinction risks, noting that the court limited related testimony, while emphasizing Musk’s framing of the lawsuit as a defense of charitable giving and AI safety.
The Intercept, meanwhile, placed Musk’s safety warnings in a broader context of AI being used in military and targeting systems, quoting Amos Toh as saying, “Existing AI models are already pushing policymakers and militaries toward nuclear escalation — there’s a real danger of Skynet-like outcomes even without a Skynet-style takeover.”
It also quoted a clause about Google’s AI services to the Pentagon, stating, “We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight,” and it described that deal as including “classified workloads.”
Gotrade added a separate regulatory dimension, saying “the United States Senate is widening its AI safety probe into major technology firms,” and it claimed the probe could affect Tesla, Microsoft, and NVIDIA.