The OpenAI Files: Ex-staff claim profit ambition betrays AI safety

What began as one of the tech industry's most idealistic goals, building artificial intelligence for the benefit of humanity, is now facing a reckoning.

A new report called “The OpenAI Files” paints a damning picture of a widening disconnect at the company once hailed as Silicon Valley’s moral compass. Ex-employees, senior scientists, and even co-founders have come forward to describe an organization abandoning its principles in its quest for unfettered power and profit.

At the center of this crisis is a growing belief among insiders that OpenAI is sidelining safety and ethics, just when they are needed most.

From Public Good to Private Gain: The Profit Cap Under Threat

When OpenAI was founded in 2015, its mission was clear: AI should be stewarded by a public non-profit so that if or when AGI was achieved, the bounty could be shared broadly rather than hoarded by a few investors.

To make that commitment concrete, OpenAI instituted an unusual profit cap: investors could earn a return, but not unlimited profit. In theory, the cap acted as a check on runaway commercialization and kept the company aligned with the public good.

But according to the internal documents and interviews included in The OpenAI Files, that safeguard is now in jeopardy.

“The nonprofit mission was a promise to do the right thing when the stakes got high,” former OpenAI technical staff member Carroll Wainwright said. “Now that the stakes are high, the nonprofit structure is being abandoned. The promise was ultimately hollow.”

According to sources within the company, leadership is now actively working to eliminate or bypass the profit cap altogether, fundamentally abandoning the non-profit purpose, reportedly under pressure from venture capitalists seeking multi-billion-dollar returns. If successful, the change could radically alter the course of one of the largest AI companies in the world.

Building Distrust: Altman Under Fire

Much of that dissatisfaction centers on OpenAI CEO Sam Altman, who first became the public face of optimism about AI and has long positioned himself as the voice of “open,” “safe,” and responsible development of AI and AGI. Increasingly, he has also become the focal point of discontent inside the company.

Altman has long been, and remains, a controversial executive. Former colleagues from his earlier companies have described him as “delusional and chaotic”, and similar complaints, vague but frequent, have surfaced at OpenAI as well.

Even Ilya Sutskever, an OpenAI co-founder and one of the most respected AI researchers in the world, has reportedly lost trust in Altman’s decision-making. After departing the company in 2024 to start his own venture, he shared startling views about Altman’s fitness to lead:

“I don’t think Sam has the capabilities to be the guy who has his finger on the button for AGI,” he told his closest colleagues.

Mira Murati, OpenAI’s former CTO, now distanced from the company’s leadership, expressed similar reservations.

“I don’t feel good that Sam is taking us to AGI”, she said. Her comments also pointed to a dysfunctional management style: Altman, she suggested, tells people what they want to hear and then undermines their work later on.

Former board member Tasha McCauley echoed those concerns, stating that such behavior should be disqualifying at a company whose work could one day produce intelligence more powerful than humans.

Safety on the Sidelines: The Productization Problem

As OpenAI churns out more and more polished products — from ChatGPT to GPT-4 and beyond — insiders say the company’s long-term safety agenda has become increasingly sidelined.

Earlier this year, Jan Leike, who led OpenAI’s Superalignment team (responsible for aligning future AGI systems with human values), resigned from the company with a sober warning: research critical to AI safety was underfunded, under-supported, and deprioritized.

“We were sailing against the wind,” Leike posted. “Safety culture and processes have been put on the back burner in favor of shiny products.”

This has rung alarm bells for researchers worried that OpenAI is barreling forward with advanced models without fully understanding, or mitigating, their risks.

Security Concerns and Insider Warnings

The safety concerns extend beyond algorithms. Former researcher William Saunders testified to the U.S. Senate that, beyond questions of who controls the AI itself, OpenAI had glaring security vulnerabilities: during his time there, hundreds of engineers had improper access to the company’s most powerful models, including GPT-4. Controls this weak at an organization building technology that can influence markets, governments, and public opinion are what some experts call a “national security blind spot.”

The Whistleblowers’ Roadmap: Reestablishing Trust

The former insiders have left, but they have not faded away. Instead, they have issued a set of demands, a blueprint for restoring OpenAI’s original mission before it is too late.

Their demands are:

  1. Restore real power to the nonprofit board, including veto authority over safety decisions.
  2. Conduct an independent investigation into CEO Sam Altman and the company’s conduct.
  3. Restore real whistleblower protections so employees can report safety concerns without fear of retaliation, harassment, or financial ruin.
  4. Maintain the profit cap for investors so that AI development remains grounded in public benefit rather than unlimited private gain.
  5. Establish independent safety oversight so that OpenAI does not get to “mark its own homework”.

What to Keep in Mind: OpenAI Is Not an Ordinary Tech Company

This is not a story of corporate strife or tech gossip. It is a cautionary tale about how fragile safety protocols become when billions are on the line, particularly at a company developing one of the most transformational technologies in history.

As former board member Helen Toner warned,

“Internal guardrails are fragile when money is on the line.”

And now, the people who understand OpenAI best are telling the world that those guardrails have collapsed.

Who Do We Trust to Build Our Future?

With OpenAI racing into the unknown, building ever more powerful models with less transparency and fewer guardrails, the world is left to wonder:

Whose finger is on the button?

The answer will shape the future not only of AI, but of humanity itself.
