AI Giants Unprepared for Risks, Warns Report; Safety Scores Highlight Industry-Wide Negligence
July 17, 2025
The Future of Life Institute's latest AI Safety Index grades AI companies on their safety measures: Anthropic received the highest grade, a C+, followed by OpenAI with a C and Google DeepMind with a C-, indicating widespread inadequacy in safety planning.
While advancements in AI promise benefits like better understanding of the human brain and neurological treatments, they also escalate risks that are not being sufficiently addressed.
These concerns echo the AI safety summit in Paris, where experts called for global collaboration and warned that safety measures are lagging behind technological progress.
Recent advances, particularly in AI systems' emotional intelligence, show human-like capabilities progressing rapidly without adequate safety protocols in place.
Google DeepMind responded to the report by stating that it does not fully reflect their comprehensive safety efforts.
FLI co-founder Max Tegmark expressed urgent concern, comparing the current state of AI safety planning to building a nuclear power plant with no plan to prevent a meltdown.
Innovations in AI architecture, such as increasing network complexity, may be bringing us closer to artificial general intelligence (AGI), which raises further safety concerns.
The FLI report warns that leading AI companies are 'fundamentally unprepared' for the risks of developing human-level AI, highlighting a significant industry-wide safety gap.
Despite ambitions to achieve AGI within the next decade, most companies lack coherent safety and control plans for these powerful systems.
The index evaluates major AI developers across six safety-related areas. In the 'existential safety' category specifically, none of the seven leading labs assessed, including OpenAI, Google DeepMind, and Anthropic, scored higher than a D, underscoring the industry's lack of readiness for potential catastrophic failures.
Summary based on 2 sources
Sources

The Guardian • Jul 17, 2025
AI firms ‘unprepared’ for dangers of building human-level systems, report warns
City AM • Jul 17, 2025
AI giants ‘fundamentally unprepared’ for dangers of human level intelligence