AI Safety in Crisis: Ex-OpenAI Researcher Warns of Looming Dangers Amid Global Tech Race
January 29, 2025
Steven Adler, a former safety researcher at OpenAI, has raised serious concerns about the rapid pace of artificial intelligence (AI) development and the industry's ongoing race toward artificial general intelligence (AGI).
The competitive landscape in AI has intensified, particularly with the emergence of DeepSeek, a Chinese startup whose AI model rivals leading U.S. systems at a significantly lower cost, rattling U.S. investors.
Adler emphasized the urgent need for transparency and robust safety regulation within the AI industry, reflecting a growing consensus among experts that cooperative oversight is necessary for safe AI development.
He also highlighted that while AGI has the potential to benefit humanity, it poses significant dangers if not handled responsibly, echoing concerns raised by other prominent figures in the field.
His departure reflects broader frustrations within OpenAI, where nearly half of the team focused on long-term AI risks has left, raising alarms about the company's safety culture.
The ethical stakes of AI research have been further underscored by tragic events, including the death of former OpenAI researcher Suchir Balaji, which raised questions about the industry's practices.
In response to this heightened competition, OpenAI's CEO Sam Altman described the situation as invigorating and indicated that the company would accelerate its product releases to maintain its edge.
Adler warned that even well-intentioned AI labs might feel pressured to cut corners to stay competitive, which could lead to disastrous outcomes.
Adler's resignation from OpenAI coincided with increasing scrutiny of the AI sector, particularly following the announcement of DeepSeek's competing AI model.
After leaving OpenAI, Adler is taking a break but remains engaged in discussions about AI safety, inviting input on overlooked ideas related to safety policy and control methods.
AI researchers have voiced concerns that AGI or superintelligence could exceed human control, with expert surveys indicating a notable risk of existential catastrophe stemming from AGI.
During his tenure at OpenAI, Adler contributed to discussions on AI safety and ethics, overseeing safety research and programs linked to product launches and long-term AI projects.
Summary based on 9 sources
Sources
Times Of India • Jan 30, 2025: ChatGPT maker OpenAI's former researcher shares AI fears: I'm pretty terrified by ...
Hindustan Times • Jan 30, 2025: ‘Pretty terrified by the pace of AI development’: OpenAI researcher quits, claims labs are taking ‘very risky gamble’