AI Pioneer Geoffrey Hinton Warns: 10-20% Chance AI Could Outpace Human Control Within a Decade
April 28, 2025
Geoffrey Hinton, a pivotal figure in the development of artificial intelligence, has raised alarms about the potential for AI to take control away from humans, estimating a 10 to 20 percent chance of this occurring.
In a recent interview on CBS Saturday Morning, Hinton voiced his concerns about the unregulated and rapidly advancing AI industry, emphasizing that many people do not fully understand the associated risks.
Central to his worries is artificial general intelligence (AGI): AI that matches or surpasses human intelligence and can act independently.
While recognizing AI's benefits in fields such as healthcare and education, Hinton criticized major tech companies, including OpenAI and Google, for prioritizing profit over safety.
He expressed particular disappointment with Google for reversing its ethical stance on military applications of AI, a shift he finds troubling.
Ultimately, Hinton believes that society is not fully aware of the potential dangers of AI, underscoring the need for greater public understanding and proactive measures.
Hinton advocates for government intervention to slow AI development, arguing that robust safety research is essential before further advancements are made.
Hinton pointed to competition among tech companies and nations as a barrier to halting the development of superintelligent systems, arguing that this rivalry makes avoiding such an outcome unlikely.
He has also revised his timeline for the emergence of superintelligent AI, now predicting it could arrive within the next decade, a significant acceleration from his earlier estimates.
He warned that AI could empower hackers, making cyberattacks on critical infrastructure, such as banks and hospitals, more feasible.
Hinton likened the current state of AI development to raising a tiger cub, cautioning that without proper safeguards, AI could become dangerous as it matures.
He emphasized the urgency for global AI safety frameworks, stating that we are at a critical juncture that could lead to significant global transformation.
Summary based on 9 sources
Sources

Business Insider • Apr 28, 2025
'Godfather of AI' says humans would be powerless if AI seized control
TechRadar • Apr 28, 2025
The Godfather of AI is more worried than ever about the future of AI
Economic Times • Apr 29, 2025
'AI is like a cute tiger cub': Scientist who won Nobel for artificial intelligence has a scary warning for