Over 850 public figures—including Nobel laureates, royals, and AI pioneers—signed a Future of Life Institute statement on October 22, 2025, calling for a global superintelligence ban until science proves it safe and public buy-in is secured.
That's a lot of extremely smart, credentialed people saying "maybe we should pump the brakes here." And unlike most AI doom-posting, this one made me actually stop and think.
Who's Calling for This
These aren't random Twitter users or conspiracy theorists. The signatories include Nobel Prize winners, people with serious AI research credentials, and even some folks who've built AI systems themselves. These are people who understand the technology deeply and are still saying "this might be a really bad idea."
That's different from the usual tech critics who don't understand how LLMs work. These people understand exactly how they work, and that's why they're worried.
What They're Actually Asking For
The key phrase is "until science proves it safe and public buy-in is secured." They're not saying "ban all AI forever." They're saying "maybe we shouldn't race toward superintelligent AI without knowing whether we can control it."
Which, when you put it that way, sounds almost... reasonable?
The Counterargument (Which I Also Understand)
The counterargument is that you can't "ban" AI development. It's not like nuclear weapons, where you need a massive industrial complex and rare materials. Much of AI research happens on consumer hardware, and it's happening in dozens of countries simultaneously. A ban would just mean the most cautious, safety-conscious researchers stop while everyone else continues.
Plus, there's the "whoever gets there first wins" mentality. If the US slows down on AI development, does China? Does anyone? Or do we just ensure that the first superintelligent AI is built by whoever cares least about safety?
My Uncomfortable Middle Ground
Here's where I land, and I hate that I don't have a cleaner take: both sides are right about different things.
The signatories are right that we don't know if we can control a superintelligent AI. We can barely control current LLMs—they do weird, unexpected things constantly. Scaling that up to something dramatically more intelligent than humans seems... bad? Possibly very bad?
But the tech companies are also right that unilateral slowdowns don't work. And AI research does have genuine benefits. And there's no clear line where we'd know we've "proven it safe enough."
What Actually Worries Me
What worries me isn't superintelligent AI specifically. What worries me is that we're having this debate now, after billions have been invested, after entire companies have been built around the assumption that we'll just keep scaling up AI capabilities indefinitely.
We should have had this conversation five years ago. Or ten. Now we're committed to a path, and changing course would mean massive economic disruption, companies going under, and people losing jobs. So we probably won't change course, even if we should.
The Question Nobody Wants to Ask
If 850 extremely qualified people are saying "this might end badly," and we collectively decide to ignore them because stopping would be too economically painful... what does that say about us?
I don't have a good answer to that. I'm not even sure there is one.
Where Do We Go From Here?
I don't think we're getting a global ban on superintelligent AI development. Too many economic incentives, too many national security concerns, too many people who think they can be the ones to build it safely.
But maybe—maybe—we can at least slow down enough to think about what we're building. Maybe we can have more international coordination. Maybe we can invest more in AI safety research alongside AI capabilities research.
Or maybe we'll just keep racing forward because that's what humans do, and then we'll find out whether the worried Nobel laureates were right or just being overcautious.
I genuinely don't know which outcome I'm betting on. Check back with me in a few years—assuming we're all still here and haven't created something we couldn't control.
Fun times in tech, everybody. Fun times.