Public figures across technology, academia, and policy have signed a new letter from the Future of Life Institute urging world governments to halt artificial superintelligence (ASI) development until such systems are demonstrably controllable and have earned public approval. The statement warns of existential and societal risks tied to uncontrolled frontier AI research and calls for a formal international moratorium until safety can be demonstrated, oversight established, and democratic consent secured. Full statement: Future of Life Institute – Superintelligence Statement.
Why it matters: This letter marks another flashpoint in the growing divide between AI accelerationists and those advocating for caution. While it echoes previous calls for regulation, the message lands at a time when public anxiety over superintelligence is rising. Critics note that the world's leading labs (OpenAI, Google DeepMind, and Anthropic) are absent from the effort, and that how an "ASI pause" would be defined or enforced remains an open question. Still, with leading AI pioneers and major public figures aligning behind the statement, it adds fresh momentum to the debate over whether humanity can, or should, build systems smarter than itself.