OpenAI’s New AI Shows ‘Steps Towards Biological Weapons Risks’, Ex-Staffer Warns Senate

OpenAI’s newest GPT-o1 AI model is the first to demonstrate capabilities that could aid experts in reproducing known—and new—biological threats, a former company insider told U.S. Senators this week.

“OpenAI’s new AI system is the first system to show steps towards biological weapons risk, as it is capable of helping experts in planning to reproduce a known biological threat,” William Saunders, a former member of technical staff at OpenAI, told the Senate Committee on the Judiciary Subcommittee on Privacy, Technology, & the Law.

This capability, he warned, carries the potential for “catastrophic harm” if AGI systems are developed without proper safeguards.

Experts also testified that artificial intelligence is evolving so quickly that a potentially perilous threshold known as Artificial General Intelligence (AGI) looms on the near horizon. At the AGI level, AI systems can match human intelligence across a wide range of cognitive tasks and learn autonomously. If a publicly available system can understand biology and develop new weapons without proper oversight, the potential for malicious users to cause serious harm grows exponentially.

“AI companies are making rapid progress towards building AGI,” Saunders told the Senate Committee. “It is plausible that an AGI system could be built in as little as three years.”

Helen Toner, a former OpenAI board member who voted to fire co-founder and CEO Sam Altman, also expects to see AGI sooner rather than later. “Even if the shortest estimates turn out to be wrong, the idea of human-level AI being developed in the next decade or two should be seen as a real possibility that necessitates significant preparatory action now,” she testified.

Saunders, who worked at OpenAI for three years, highlighted the company’s recent announcement of GPT-o1, an AI system that “passed significant milestones” in its capabilities. As reported by Decrypt, even OpenAI said it decided to step away from the traditional numerical increase in GPT versions because the model exhibited new capabilities, making it fair to see it not just as an upgrade but as an evolution: a brand new type of model with different skills.

Saunders is also concerned about the lack of adequate safety measures and oversight in AGI development. He pointed out that “No one knows how to ensure that AGI systems will be safe and controlled,” and criticized OpenAI’s new approach to safe AI development, saying the company cares more about profitability than safety.

“While OpenAI has pioneered aspects of this testing, they have also repeatedly prioritized deployment over rigor,” he cautioned. “I believe there is a real risk they will miss important dangerous capabilities in future AI systems.”

The testimony also highlighted internal challenges at OpenAI, especially those that came to light after Altman’s ouster. “The Superalignment team at OpenAI, tasked with developing approaches to control AGI, no longer exists. Its leaders and many key researchers resigned after struggling to get the resources they needed,” he said.

His words add to a growing wall of complaints and warnings that AI safety experts have raised about OpenAI’s approach. Ilya Sutskever, who co-founded OpenAI and played a key role in firing Altman, resigned after the launch of GPT-4o and founded Safe Superintelligence Inc.

OpenAI co-founder John Schulman and its head of alignment, Jan Leike, left the company to join rival Anthropic, with Leike saying that under Altman’s leadership, safety “took a backseat to shiny products.”

Likewise, former OpenAI board members Toner and Tasha McCauley wrote an op-ed published by The Economist, arguing that Sam Altman was prioritizing profits over responsible AI development, hiding key developments from the board, and fostering a toxic environment in the company.

In his statement, Saunders called for urgent regulatory action, emphasizing the need for clear safety measures in AI development, not just from the companies but from independent entities. He also stressed the importance of whistleblower protections in the tech industry.

The former OpenAI staffer highlighted the broader implications of AGI development, including the potential to entrench existing inequalities and facilitate manipulation and misinformation. Saunders also warned that the “loss of control of autonomous AI systems” could result in “human extinction.”

Edited by Josh Quittner and Andrew Hayward
