A coalition of scientists, policymakers, and public figures is urging a global pause on developing artificial intelligence systems that could surpass human intelligence.
The call, organized by the Future of Life Institute, has gathered over 800 signatures, including AI pioneers Yoshua Bengio and Geoffrey Hinton, Apple co-founder Steve Wozniak, and public figures like Richard Branson, Steve Bannon, and Susan Rice, Axios reported.
"We're proud to be early supporters and to have helped with this effort. So why do we need to ban the development of superintelligence? AI vastly smarter than humans threatens humanity with loss of freedom, dignity & control. Crucially, it also threatens us with extinction." — ControlAI (@ai_ctrl) October 22, 2025
The group demands a halt on “superintelligence” development until it is proven safe, controllable, and publicly supported.
A new poll cited by the Institute shows that 64% of Americans want an “immediate pause” on advanced AI work, and three-quarters favor strong regulation.
"We won't realize AI's promising potential to improve human life, health, and prosperity if we don't account for the risks. Developers and policymakers must consider the potential danger of artificial superintelligence raised by these leading thinkers." — Rep. Don Beyer (@RepDonBeyer) October 22, 2025
Despite similar calls in 2023 that went unheeded, signatories argue that the rapid pace of AI progress, encouraged by the Trump administration's pro-innovation stance, poses global risks.
They warn that without oversight, AI could outpace humanity’s ability to govern it.