What happened to OpenAI’s long-term AI risk team?

Benj Edwards

In July last year, OpenAI announced the formation of a new research team that would prepare for the arrival of a super-intelligent artificial intelligence capable of outwitting and dominating its creators. Ilya Sutskever, chief scientist at OpenAI and one of the company’s co-founders, was named co-leader of this new team. OpenAI said the team would receive 20 percent of its computing power.

Now OpenAI’s “superalignment team” no longer exists, the company confirms. This comes after the departures of several researchers involved, Tuesday’s news that Sutskever was leaving the company, and the resignation of the team’s other co-leader. The group’s work will be absorbed into other OpenAI research efforts.

Sutskever’s departure made headlines because, although he helped CEO Sam Altman start OpenAI in 2015 and set the direction of the research that led to ChatGPT, he was also one of four board members who fired Altman in November. Altman was reinstated as CEO five chaotic days later, after a mass revolt by OpenAI staff and the negotiation of a deal under which Sutskever and two other company directors left the board.

Hours after Sutskever’s departure was announced on Tuesday, Jan Leike, the former DeepMind researcher who was the other co-leader of the superalignment team, posted on X that he had resigned.

Neither Sutskever nor Leike responded to requests for comment. Sutskever did not offer an explanation for his decision to leave, but in a post on X he offered support for OpenAI’s current path. “The company’s journey has been nothing short of miraculous, and I am confident that OpenAI will create AGI that is safe and beneficial” under its current leadership, he wrote.

Leike posted a thread on X on Friday explaining that his decision stemmed from a disagreement over the company’s priorities and the amount of resources being allocated to his team.

“I have been at odds with OpenAI leadership over the company’s core priorities for quite some time, until we finally reached a breaking point,” Leike wrote. “For the last few months my team has been sailing against the wind. At times we were struggling for compute, and it was becoming more and more difficult to get this crucial research done.”

The dissolution of OpenAI’s super-alignment team adds to recent evidence of a restructuring within the company in the wake of last November’s governance crisis. Two researchers on the team, Leopold Aschenbrenner and Pavel Izmailov, were fired for leaking company secrets, The Information reported last month. Another member of the team, William Saunders, left OpenAI in February, according to an Internet forum post in his name.

Two other OpenAI researchers working on AI policy and governance also appear to have recently left the company. Cullen O’Keefe left his position as research lead on policy frontiers in April, according to LinkedIn. Daniel Kokotajlo, an OpenAI researcher who has co-authored several papers on the dangers of more capable AI models, “left OpenAI due to loss of confidence that it would behave responsibly in the age of AGI,” according to a post on an internet forum made under his name. None of the researchers who apparently left responded to requests for comment.

OpenAI declined to comment on the departures of Sutskever or other members of the superalignment team, or on the future of its work on long-term AI risks. Research into the risks associated with more powerful models will now be led by John Schulman, who co-leads the team responsible for fine-tuning AI models after training.

The superalignment team wasn’t the only team thinking about how to keep AI under control, although it was publicly positioned as the main one working on the most distant version of that problem. The blog post announcing the team last summer said: “We currently do not have a solution to direct or control a potentially super-intelligent AI and prevent it from going rogue.”

OpenAI’s charter obliges it to develop so-called artificial general intelligence, or technology that rivals or surpasses humans, safely and for the benefit of humanity. Sutskever and other leaders have often spoken of the need to proceed with caution. But OpenAI has also been early in developing and releasing experimental AI projects to the public.

OpenAI was once unusual among prominent AI labs for the enthusiasm with which research leaders like Sutskever talked about creating superhuman AI and the possibility that such technology could turn against humanity. That kind of doom-laden talk about AI became much more widespread last year after ChatGPT made OpenAI the most prominent and closely watched tech company on the planet. As researchers and policymakers wrestled with the implications of ChatGPT and the prospect of much more capable AI, it became less controversial to worry about AI harming humans or humanity as a whole.

Since then, the existential angst has cooled (and AI has yet to take another big leap), but the need to regulate it remains a hot topic. And this week OpenAI introduced a new version of ChatGPT that could once again change people’s relationship with technology in new, powerful, and perhaps problematic ways.

Sutskever and Leike’s departures come shortly after OpenAI’s latest big reveal: a new “multimodal” AI model called GPT-4o that allows ChatGPT to see the world and converse in a more natural, human-like way. A live-streamed demo showed the new version of ChatGPT mimicking human emotions and even attempting to flirt with users. OpenAI has said it will make the new interface available to paid users in a couple of weeks.

There is no indication that the recent departures have anything to do with OpenAI’s efforts to develop more human-like AI or launch products. But the latest developments raise ethical questions around privacy, emotional manipulation, and cybersecurity risks. OpenAI maintains another research group, called the Preparedness team, that focuses on these issues.

This story originally appeared on wired.com.
