Observing OpenAI’s Affair With Musk: A Legal Drama Unfolds

In a recent turn of events that has the tech world abuzz, Elon Musk, who co-founded OpenAI in 2015 and has since established his own AI company, X.AI, has filed a lawsuit highlighting the evolving ethos of OpenAI, the entity behind ChatGPT. Initially celebrated for its dedication to leveraging artificial general intelligence (AGI) for the public good, OpenAI has drawn controversy over its partnership with Microsoft, which included the latter obtaining observer rights on the former's board, a development we discussed a few weeks ago and one that has attracted considerable regulatory attention. Musk's lawsuit articulates a profound sense of disappointment, arguing that OpenAI has strayed far from its founding principles of open-source innovation and societal betterment in pursuit of commercial gain.

SAN FRANCISCO, CA – OCTOBER 06: Elon Musk and Sam Altman speak onstage during "What Will They Think of Next? Talking About Innovation." (Photo by Michael Kovac/Getty Images for Vanity Fair)

As these events unfold, the landscape of AI governance and development becomes increasingly complex, with Musk's lawsuit against OpenAI sitting at the center of a web of legal, ethical, and regulatory challenges. The crux of Musk's argument hinges on what he perceives as a fundamental shift in OpenAI's trajectory: a deviation from its mission to democratize AGI toward a more profit-oriented approach under Microsoft's influence. Musk suggests that this pivot not only serves Microsoft's financial interests but also undermines the altruistic foundation upon which OpenAI was established. He calls for a return to OpenAI's open-source roots and seeks an injunction against the commercial exploitation of AGI technology.

The lawsuit includes a slew of claims such as breach of contract, promissory estoppel, breach of fiduciary duty, unfair competition, and a somewhat enigmatic “accounting” claim. The lawsuit names two individuals, Sam Altman and Gregory Brockman – presumably San Francisco residents – as defendants. Additionally, eight entities, including a non-profit, a limited partnership, and six limited liability companies, all registered in Delaware, are named in the lawsuit. Musk’s legal team contends that the claims made in the case are rooted in actions taken by individuals as agents of Delaware entities. This aligns with Musk’s broader strategy, positioning the legal dispute within the context of jurisdictional challenges and contractual ambiguities. Indeed, the lawsuit, which was filed in San Francisco County rather than Delaware, arguably reflects Musk’s discord with Delaware courts, in addition to the recognition that Delaware courts, operating under the internal affairs doctrine, would swiftly discern his intent to stifle competition in the industry.

A central issue in the case is jurisdiction. While jurisdiction over Altman and Brockman is established by their residency in the county, Musk's lawyers argue that the claims relate to actions the two undertook as agents of the Delaware entities. Concerns have been raised about potential clauses in the corporate charters of those entities mandating that fiduciary disputes be resolved in Delaware courts. Delaware's internal affairs doctrine further complicates matters, suggesting that the Court of Chancery, not the California Superior Court, should hear cases involving the internal affairs of Delaware entities. This presents a formidable challenge for Musk in keeping the lawsuit in San Francisco.

Finally, a critical element of the dispute is the viability of the breach of contract and promissory estoppel claims. OpenAI's side is likely to question the enforceability of contracts that purport to bind individuals to act for the "betterment of humanity," given the subjective nature of such a metric. Clearer evidence of these contracts, or documents backing some of the allegations, may also be needed. Matt Levine has described where Musk arguably finds contractual proof of the promise to act in a way that benefits humanity, pointing to Musk's historic agreement to advance that goal and his funding of it.

Adding to the intricate narrative surrounding Elon Musk's lawsuit against OpenAI, several events in recent months have thickened the plot, illustrating the multifaceted nature of this dispute. Firstly, last year Musk, along with other tech leaders, endorsed an open letter advocating a six-month moratorium on the development of advanced AI models. The pause was arguably proposed to allow both regulators and corporations to evaluate the potential risks of rapidly evolving AI technologies, highlighting growing concerns over the pace and direction of AI advancements. That said, Musk's advocacy for a development pause might also have been driven by self-interest: additional time to advance his own AI endeavors.

Secondly, the temporary removal of Sam Altman from his leadership role at OpenAI in November 2023, followed by his swift reinstatement, a sequence of events significantly influenced by Microsoft, is a noteworthy example of the intricate dynamics of corporate governance in the AI industry. Microsoft, which holds a 49% stake in OpenAI's for-profit arm and has notably obtained a right to observe the board's proceedings, played a pivotal role in facilitating Altman's return. These board observer rights represent a key issue in OpenAI's case, and we have recently discussed them as part of a broader academic study that leverages empirical data to examine board observers as a governance mechanism shaping the strategic direction of startups and tech firms: "Board Observers: Shadow Governance in the Era of Big Tech."

Thirdly, the Securities and Exchange Commission (SEC) has initiated an investigation into whether OpenAI misled its investors during the tumultuous period surrounding Altman's dismissal and subsequent reappointment. The investigation, as sources described it to The Wall Street Journal, appears to be a direct response to the former board's allegations against Altman in its November 2023 statement. The probe emerges amid Altman's efforts to secure funding for a new chip venture and follows a recent deal that reportedly values the Microsoft-backed startup at $80 billion or more.

In this photo illustration, an OpenAI logo is displayed on a smartphone screen with a Microsoft logo in the background. (Photo by Nikolas Kokovlis/NurPhoto via Getty Images)

Musk's lawsuit underscores a broader debate on the stewardship of AI development and the ethical implications of proprietary versus open-source models. It also touches upon what critics describe as a convoluted, and at times disorderly, governance structure within OpenAI.

Referencing the corporate structure, Professor Dana Brakman Reiser of Brooklyn Law School, a globally recognized expert in the law at the intersection of business and charity and co-author of the book Social Enterprise Law: Trust, Public Benefit, and Capital Markets, said that "nonprofits use for-profit subsidiaries to develop businesses all the time, but the magnitude of OpenAI's business and its continued additions to its structure could put its nonprofit mission at risk. Ultimately, nonprofit law relies on those sitting on boards of directors to meet their fiduciary obligations to serve the charitable missions their organizations undertake. And it's fair to worry whether OpenAI's nonprofit board is doing so. But it's unlikely Musk has standing to challenge them. Directors can police each other's compliance with these duties, as can attorneys general, but not donors, potential competitors, or the public more broadly – for good reason. It would be deeply destabilizing to subject every nonprofit's directors to suits from any concerned member of the public claiming they could better serve their organizations' missions."

Somewhat similarly, Professor Patrick Corrigan of Notre Dame Law School, who has a forthcoming paper on related issues, has argued that "the corporate governance problem for social enterprises is that a governance failure arises from the costs of coordinating around unpriced objectives, exacerbated by third parties who benefit by disrupting cooperation." Against a backdrop of widespread disagreement among stockholders about whether, and how much, to value social goods, the transaction costs of coordinating around unpriced objectives help to explain why so many social enterprises experience "mission drift" away from a social purpose and toward profit maximization.

How will this legal drama end? This case transcends a mere lawsuit centered on breach of contract, negligence, and a failure to adhere to fiduciary duties between potential competitors. It mirrors the escalating tensions within the AI community regarding the direction and governance of technology capable of transforming society. Moreover, it raises critical questions about the future oversight and strategic guidance of AI technologies. Stay tuned!
