Hundreds of public figures, including well-known technologists such as Apple co-founder Steve Wozniak and AI pioneers Geoffrey Hinton and Yoshua Bengio, have signed a “Statement on Superintelligence” calling for a prohibition on developing superintelligence until there is both “broad scientific consensus” that it can be done safely and “strong public buy-in.” The effort, backed by the Future of Life Institute, reprises the group’s 2023 call for a moratorium on training systems more powerful than GPT-4. The intent is all-encompassing safety. But the effect would be widespread stagnation. Such a sweeping prohibition would delay innovation, empower bureaucracies, and hand an enormous advantage to China.
The proposal rests on the “precautionary principle”—the idea that we should act only once risks are fully understood. Better safe than sorry. The notion sounds reasonable but often paralyzes progress. After the 1979 Three Mile Island accident, fear-driven regulation froze US nuclear power for decades, denying Americans abundant, carbon-free energy. Europe’s biotech bans did the same for agricultural innovation. As political scientist Aaron Wildavsky once observed, “If you can do nothing without knowing first how it will turn out, you cannot do anything at all.”
AI skeptics now want to apply the same flawed logic to superintelligence, a potentially transformative general-purpose technology that, for the record, does not yet exist. And the statement’s definition of possible harm is so broad, encompassing “economic disempowerment,” “losses of dignity,” and even “human extinction,” that it effectively demands a permanent pause. No technology in history has been required to clear such an impossibly high bar. Demanding universal scientific and public consensus on questions of “dignity” or “control” over something that remains science fiction is a formula for permanent paralysis.
Markets, not moratoria, are the better safeguard. Companies that deploy unsafe or unreliable systems face reputational collapse, lawsuits, and consumer flight. In regulated industries such as finance and healthcare, oversight and liability amplify these pressures, creating a profitable race to the top in reliability. Decentralized competition also encourages experimentation and redundancy—the safety that comes from having many independent systems, approaches, and models operating in parallel rather than relying on a single point of failure.
As AI analyst Dean Ball persuasively argues, centralizing power over what prohibitionists consider the world’s most dangerous technology, should it ever exist, in the hands of states or a global institution with coercive authority would make the world less safe, not more.
While Western signatories call for extreme caution, China wants to accelerate. Alibaba CEO Eddie Wu recently laid out a “roadmap toward artificial superintelligence” and pledged over $53 billion in AI investment. If superintelligence is possible, no lover of human freedom would want China to get there first.
Indeed, speed itself may be the safest route. Researchers Leopold Aschenbrenner and Philip Trammell argue in “Existential Risk and Growth” that faster innovation shortens humanity’s exposure to danger. As nations grow richer, they can afford stronger safeguards. Slowing progress, paradoxically, could trap us in the riskiest phase of development for far longer.
Faster, please!
