Ex-OpenAI staff call for ‘right to warn’ about AI risks without retaliation

On Tuesday, a group of former OpenAI and Google DeepMind employees published an open letter calling on AI companies to commit to principles that allow employees to raise concerns about AI risks without fear of retaliation. The letter, titled “A right to warn about advanced artificial intelligence,” has so far been signed by 13 people, including some who chose to remain anonymous due to concerns about possible repercussions.

The signatories argue that while AI has the potential to bring benefits to humanity, it also poses serious risks, including “further entrenchment of existing inequalities, manipulation and misinformation, and loss of control of autonomous AI systems,” which could result in human extinction.

They also claim that AI companies possess substantial non-public information about the capabilities, limitations and risk levels of their systems, but currently have only weak obligations to share this information with governments and none with civil society.

Non-anonymous signatories to the letter include former OpenAI employees Jacob Hilton, Daniel Kokotajlo, William Saunders, Carroll Wainwright and Daniel Ziegler, as well as former Google DeepMind employees Ramana Kumar and Neel Nanda.

The group calls on AI companies to commit to four key principles: do not enforce agreements that prohibit criticism of the company over risk-related concerns, facilitate an anonymous process for employees to raise concerns, support a culture of open criticism, and do not retaliate against employees who publicly share risk-related confidential information after other processes have failed.

In May, a Vox article by Kelsey Piper raised concerns about OpenAI’s use of restrictive confidentiality agreements for departing employees, which threatened to cancel vested equity if former employees criticized the company. OpenAI CEO Sam Altman responded to the allegations, stating that the company had never clawed back vested equity and would not do so even if employees declined to sign the separation agreement or non-disparagement clause.

But critics remained dissatisfied, and OpenAI soon made a public U-turn on the issue, saying it would remove the non-disparagement clause and equity clawback provisions from its separation agreements, acknowledging that such terms were inappropriate and contrary to the company’s stated values of transparency and accountability. That move by OpenAI is likely what made the current open letter possible.

Dr. Margaret Mitchell, an AI ethics researcher at Hugging Face who was fired from Google in 2021 after raising concerns about diversity and censorship within the company, spoke to Ars Technica about the challenges whistleblowers face in the technology industry. “In theory, you can’t be legally retaliated against for whistleblowing. In practice, it appears you can,” Mitchell said. “The laws support the goals of big business at the expense of workers. They are not in favor of workers.”

Mitchell highlighted the psychological cost of seeking justice against a large corporation, saying: “Basically, you have to give up your career and your psychological health to seek justice against an organization that, by virtue of being a company, has no feelings and does have the resources to destroy you.” She added: “Remember that it is up to you, the person fired, to make the case that you were retaliated against — a single person, with no source of income after being fired — against a trillion-dollar corporation with an army of lawyers who specialize in harming workers in exactly this way.”

The open letter has garnered support from prominent figures in the AI community, including Yoshua Bengio, Geoffrey Hinton (who has warned about AI in the past), and Stuart J. Russell. It’s worth noting that AI experts such as Meta’s Yann LeCun have taken issue with claims that AI poses an existential risk to humanity, and other experts feel that the “AI takeover” talking point is a distraction from the current harms of AI, such as bias and dangerous hallucinations.

Even with disagreement over what concrete harms may come from AI, Mitchell believes the concerns raised in the letter underscore the urgent need for greater transparency, oversight, and protection for employees who speak openly about potential risks: “While I appreciate and agree with this letter,” she says, “there need to be significant changes to the laws that disproportionately support unfair practices by large corporations at the expense of workers doing the right thing.”