OpenAI board first learned about ChatGPT through Twitter, according to former member

Helen Toner, former OpenAI board member, speaks during Vox Media's 2023 Code Conference at The Ritz-Carlton, Laguna Niguel, September 27, 2023.

In a recent interview on “The Ted AI Show” podcast, former OpenAI board member Helen Toner said that the OpenAI board was unaware of ChatGPT’s existence until they saw it on Twitter. She also revealed details about the company’s internal dynamics and the events surrounding the surprise firing and subsequent rehiring of CEO Sam Altman last November.

OpenAI launched ChatGPT publicly on November 30, 2022, and its massive and surprising popularity put OpenAI on a new trajectory, shifting focus from being an AI research lab to a more consumer-oriented technology company.

“When ChatGPT came out in November 2022, the board was not informed about it in advance. We found out about ChatGPT on Twitter,” Toner said on the podcast.

Toner’s revelation about ChatGPT appears to highlight a significant disconnect between the board and the company’s day-to-day operations, shedding new light on allegations that Altman “was not consistently candid in his communications with the board” that accompanied his firing on November 17, 2023. Altman and OpenAI’s new board of directors later said that Altman’s attempts to remove Toner from the OpenAI board, following her criticism of the company’s launch of ChatGPT, played a key role in his firing.

“Sam did not inform the board that he owned the OpenAI Startup Fund, despite claiming on multiple occasions to be an independent board member with no financial interest in the company,” she said. “He gave us inaccurate information about the small number of formal safety processes the company had in place, meaning it was basically impossible for the board to know how well those safety processes were working or what might need to change.”

Toner also shed light on the circumstances that led to Altman’s temporary ouster. She said that two OpenAI executives had reported instances of “psychological abuse” to the board, providing screenshots and documentation to back up their claims. The allegations from these former OpenAI executives, as relayed by Toner, suggest that Altman’s leadership style fostered a “toxic atmosphere” at the company:

In October of last year, we had this series of conversations with these executives, where the two of them suddenly started telling us about their own experiences with Sam, which they hadn’t felt comfortable sharing before, telling us how they couldn’t trust him, about the toxic atmosphere he was creating. They used the phrase “psychological abuse,” telling us that they didn’t think he was the right person to run the company, telling us that they didn’t believe he could or would change, that there was no point in giving him feedback, no point in trying to work through these issues.

Despite the board’s decision to fire him, Altman began the process of returning to his position just five days later, after a letter to the board signed by more than 700 OpenAI employees. Toner attributed this quick return to employees who believed the company would collapse without him, adding that they also feared retaliation from Altman if they did not support his return.

“The second thing that I think is really important to know, and that hasn’t really been reported, is the fear that people have of going against Sam,” Toner said. “They had seen him retaliate against people… for past instances of being critical.”

“They were very afraid of what could happen to them,” she continued. “So some employees started saying, you know, wait, I don’t want the company to fall apart. Like, let’s bring Sam back. It was very difficult for those people who had had terrible experiences to say that… If Sam stayed in power, as he ultimately did, that would make their lives miserable.”

In response to Toner’s statements, current OpenAI board chair Bret Taylor provided a statement to the podcast: “We are disappointed that Ms. Toner continues to revisit these issues… The review concluded that the prior board’s decision was not based on concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.”

Even taking that review into account, Toner’s main argument is that OpenAI has been unable to police itself, despite claims to the contrary. “The OpenAI saga shows that trying to do good and regulating yourself isn’t enough,” she said.
