A coalition of current and former employees of OpenAI, the company behind ChatGPT, has issued a warning about the existential threats posed by advanced artificial intelligence, including the potential for human extinction.
In a detailed letter published yesterday (June 4), the group, made up of 13 current and former employees of companies such as OpenAI, Anthropic, and Google DeepMind, highlighted a range of threats associated with AI, despite its potential benefits.
The letter states: “We are current and former employees of pioneering AI companies, and we believe in the potential of AI technology to bring unprecedented benefits to humanity.” However, it also highlights concerns: “These risks range from worsening existing inequalities, to manipulation and disinformation, to loss of control of autonomous AI systems that could lead to the extinction of humanity.”
Neel Nanda, head of mechanistic interpretability at DeepMind and formerly at Anthropic, was among the signatories. “This is NOT because I currently have anything that I want to warn my current or former employers about, nor any specific criticism of their attitude towards whistleblowers,” he wrote on X. “But I think AGI will have incredibly far-reaching consequences and, as all labs recognize, could pose an existential threat. Any lab seeking to create AGI must prove it is worthy of the public's trust, and ensuring employees have a strong and protected right to whistleblow is a key first step.”
I signed this call for pioneering AI companies to guarantee their employees the right to warn.
This is NOT because I currently have anything I want to warn my current or former employers about, or specific criticism of their attitude toward whistleblowers. https://t.co/hyEBuy3YDj
– Neel Nanda (@NeelNanda5) June 4, 2024
Lack of accountability and regulation of AI
The signatories state that although AI companies and governments around the world recognize these dangers, current corporate and regulatory measures are insufficient to prevent them. “AI companies have a strong financial incentive to avoid effective oversight, and we do not believe that tailored corporate governance structures are sufficient to change this situation,” they say.
The letter also criticizes the lack of transparency of AI companies, saying they hold “substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different types of harm.” It highlights the lack of any obligation for these companies to disclose such critical information, emphasizing: “They currently have only weak obligations to share some of this information with governments, and none with civil society.”
The workers expressed an urgent need for greater government oversight and public accountability. “Until there is effective government oversight of these companies, current and former employees are among the few people who can hold them accountable to the public,” the group said. They also highlighted the limits of existing whistleblower protections, which do not fully cover the unregulated risks posed by AI technologies.
OpenAI in hot water
The open letter comes amid upheaval for major AI companies, particularly OpenAI, which has deployed AI assistants with advanced features capable of engaging in live voice conversations with humans and responding to visual input such as video feeds or written math problems.
Scarlett Johansson, who played an AI assistant in the film “Her”, has accused OpenAI of modeling one of its products after her voice, despite her express refusal of such a proposal. Although OpenAI's CEO tweeted the word “her” during the voice assistant's launch, the company has since denied claims that Johansson's voice was used as a model.
In May, OpenAI also disbanded a specialized team created to investigate long-term threats associated with AI, less than a year after its creation. Last July, OpenAI's head of trust and security, Dave Willner, also resigned.
Featured image: Canva