AI leaders warn of “catastrophic outcomes” as artificial general intelligence looms
- Artificial general intelligence (AGI) could arrive within a decade, potentially bringing catastrophic outcomes such as cyberattacks on critical infrastructure, autonomous weapons and existential threats to humanity, warns Google DeepMind CEO Demis Hassabis.
- Artificial intelligence (AI) is already being weaponized for cyberattacks on critical infrastructure (e.g., energy, water systems), deepfake disinformation, fraud and job displacement, with FBI warnings about AI-generated scams and political manipulation.
- Over 350 AI experts, including OpenAI’s Sam Altman and AI pioneers Yoshua Bengio and Geoffrey Hinton, signed a statement equating AI risks with pandemics and nuclear war, urging global prioritization of AI safety measures.
- DeepMind CEO Demis Hassabis calls for international AI governance akin to nuclear non-proliferation treaties, though geopolitical tensions complicate cooperation. Meanwhile, the U.S.-China AI arms race is outpacing regulatory efforts, risking uncontrolled escalation.
- While AI offers transformative benefits (efficiency, scientific breakthroughs), unchecked development risks AGI surpassing human control. The critical question: Will humanity enforce safeguards or allow AI to become an existential threat?
The rapid advancement of artificial intelligence (AI) has sparked both excitement and deep concern among industry leaders, with warnings that artificial general intelligence (AGI) – AI that matches or surpasses human cognitive abilities – could arrive within the next decade.
Google DeepMind CEO Demis Hassabis has cautioned that AGI could bring “catastrophic outcomes,” including cyberattacks on critical infrastructure, autonomous weapons and even existential threats to humanity. Speaking at the Axios AI+ Summit in San Francisco, Hassabis described AGI as a system exhibiting “all the cognitive capabilities” of humans, including creativity and reasoning.
However, he warned that current AI models remain “jagged intelligences” with gaps in long-term planning and continual learning. Still, he suggested AGI could become reality with “one or two more big breakthroughs.”
Hassabis emphasized that some AI dangers are already materializing, particularly in cybersecurity. “That’s probably almost already happening now… maybe not with very sophisticated AI yet,” he said, pointing to cyberattacks on energy and water systems as the “most obvious vulnerable vector.”
His concerns echo broader industry warnings. Over 350 AI experts, including OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei and AI pioneers Yoshua Bengio and Geoffrey Hinton, signed a statement from the Center for AI Safety declaring: “Mitigating the risk of extinction from AI should be prioritized globally alongside other societal-scale risks, such as pandemics and nuclear war.”
AI misuse: Deepfakes, job displacement and national security threats
Beyond infrastructure attacks, AI is already being weaponized for disinformation, fraud and deepfake manipulation. The Federal Bureau of Investigation has warned of AI-generated voice scams impersonating government officials, while deepfake pornography and political misinformation are proliferating.
BrightU.AI’s Enoch notes that AI has emerged as a transformative technology, revolutionizing various sectors, from healthcare to finance. However, as with any powerful tool, AI’s potential for misuse and weaponization has raised significant concerns.
The decentralized engine defines AI weaponization as the use of AI technologies to cause harm, gain an unfair advantage or manipulate systems and people. This can manifest in various ways, including autonomous weapons, deepfakes and disinformation, social scoring and surveillance, AI-powered cyberattacks and the development of bioweapons.
Hassabis acknowledged that AI could eliminate many jobs, particularly entry-level white-collar roles, but said he is more concerned about malicious actors turning the technology to destructive ends. “A bad actor could repurpose those same technologies for a harmful end,” he said.
A 2023 report commissioned by the U.S. Department of State concluded that AI could pose “catastrophic” national security risks, urging stricter controls. Yet, as nations like the U.S. and China race for AI dominance, regulation lags behind technological progress.
How likely is an AI catastrophe?
Among AI researchers, discussions often revolve around “P(doom)” – the probability of AI causing existential disaster. Hassabis assessed the risk as “non-zero,” meaning it cannot be dismissed. “It’s worth very seriously considering and mitigating against,” he said, warning that advanced AI systems could “jump the guardrail” if not properly controlled.
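The logic behind taking even a small P(doom) seriously follows a standard expected-loss argument: a tiny probability still matters when the potential loss is effectively unbounded. As an illustrative sketch (the figures below are hypothetical, not numbers Hassabis cited):

\[
\mathbb{E}[\text{loss}] = P(\text{doom}) \times L_{\text{catastrophe}}, \qquad \text{e.g. } 0.01 \times 10^{6} = 10^{4}
\]

If a catastrophe is valued at \(10^{6}\) units while safeguards cost only a few units, even a hypothetical one percent P(doom) implies an expected loss of \(10^{4}\) units, vastly exceeding the cost of mitigation. This is why researchers argue a non-zero risk cannot simply be rounded down to zero.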
Hassabis advocates for an international agreement on AI safety, similar to nuclear non-proliferation treaties. “Obviously, it’s looking difficult at present day with the geopolitics as it is,” he admitted, but stressed that cooperation is essential to prevent misuse.
Meanwhile, tech giants continue pushing AI integration into daily life. Google envisions AI “agents” acting as personal assistants, handling tasks from scheduling to recommendations. Yet, Hassabis cautioned that society must adapt to AI-driven economic shifts, redistributing productivity gains equitably.
AI’s potential is undeniable – boosting efficiency, accelerating discoveries and transforming industries. But its risks are equally profound. As Hassabis and other experts warn, without urgent safeguards, AGI could spiral beyond human control, with consequences rivaling pandemics and nuclear war.
Watch this video arguing that AGI has already been around for more than 20 years.
This video is from the TRUTH will set you FREE channel on Brighteon.com.
Sources include:
RT.com
BeforeItsNews.com
Axios.com
Edition.CNN.com
BrightU.ai
Brighteon.com

