Leopold Aschenbrenner, a former security researcher at OpenAI, says the company's security practices were “grossly insufficient.” In a video interview with Dwarkesh Patel published Tuesday, Aschenbrenner spoke about internal conflicts over priorities, suggesting a shift in focus toward rapid growth and deployment of AI models at the expense of security.
He also said he was fired for expressing his concerns in writing.
In the sprawling four-hour conversation, Aschenbrenner told Patel that he wrote an internal memo last year detailing his concerns and circulated it among reputable experts outside the company. However, after a major security incident occurred a few weeks later, he said, he decided to share an updated memo with a few board members. He was quickly fired from OpenAI.
“What might also be useful context are the types of questions they asked me when they fired me…the questions were about my views on the progress of AI, on AGI, the appropriate level of security for AGI, whether the government should be involved in AGI, whether I and the superalignment team were loyal to the company, and what I was doing during the OpenAI board events,” Aschenbrenner said.
AGI, or artificial general intelligence, is AI that meets or exceeds human intelligence in any domain, regardless of how it was trained.
Loyalty to the company – or to Sam Altman – emerged as a key issue after his brief ouster: more than 90% of employees signed a letter threatening to resign in solidarity with him. They also popularized the slogan “OpenAI is nothing without its people.”
“I didn’t sign the employee letter during the board events, despite pressure to do so,” Aschenbrenner recalled.
The superalignment team, led by Ilya Sutskever and Jan Leike, was responsible for developing long-term safety practices to ensure that AI remains aligned with human expectations. The departure of prominent members of that team, including Sutskever and Leike, drew increased scrutiny. The entire team was then disbanded, and a new safety team was announced… led by CEO Sam Altman, who is also a member of the OpenAI board of directors to which that team reports.
Aschenbrenner said OpenAI's actions contradicted its public statements about security.
“Another example is when I raised security concerns: they would tell me that security was our number one priority,” he said. “Invariably, when it came time to invest significant resources or make trade-offs on basic measures, security was not the priority.”
This echoes statements from Leike, who said his team was “sailing against the wind” and that “safety culture and processes have taken a backseat to shiny products” under Altman’s leadership.
Aschenbrenner also expressed concerns about AGI development, emphasizing the importance of a cautious approach, especially as many fear that China will make strong efforts to surpass the United States in AGI research.
China “is going to make every effort to infiltrate American AI labs, with billions of dollars, thousands of people… (they) are going to try to surpass us,” he said, adding that what is at stake is “not only cool products, but also the survival of liberal democracy.”
Just a few weeks ago, it was revealed that OpenAI required its employees to sign restrictive nondisclosure agreements (NDAs) that prevented them from speaking out about the company's security practices.
Aschenbrenner said he refused to sign such an NDA, even though it meant walking away from equity worth about $1 million.
In response to these growing concerns, a collective of nearly a dozen current and former OpenAI employees has meanwhile signed an open letter demanding the right to speak out about corporate wrongdoing without fear of reprisal.
The letter, endorsed by industry figures including Yoshua Bengio, Geoffrey Hinton and Stuart Russell, highlights the need for AI companies to commit to transparency and accountability.
“Until there is effective government oversight of these companies, current and former employees are among the few people who can hold them accountable to the public. Yet broad confidentiality agreements prevent us from voicing our concerns, except to the very companies that may be failing to address these issues,” the letter states. “Ordinary whistleblower protections are insufficient because they focus on illegal activity, while many of the risks we are concerned about are not yet regulated.
“Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry,” the letter continues. “We are not the first to encounter or speak about these issues.”
After news of the restrictive employment covenants spread, Sam Altman claimed he was unaware of the situation and assured the public that his legal team was working to resolve the issue.
“There was a provision about potential equity cancellation in our previous exit docs; although we never clawed anything back, it should never have been something we had in any documents or communications,” he tweeted. “This is on me and one of the few times I’ve been genuinely embarrassed running OpenAI; I did not know this was happening and I should have.”
in regards to recent stuff about how openai handles equity:

we have never clawed back anyone's vested equity, nor will we if people do not sign a separation agreement (or don't agree to a non-disparagement agreement). vested equity is vested equity, full stop.
there was…
–Sam Altman (@sama) May 18, 2024
OpenAI says it has since released all employees from the disputed non-disparagement agreements and removed the clause from its departure documents.
OpenAI did not respond to a request for comment from Decrypt.