Over 850 people just signed a letter asking for a global ban on superintelligence development. Not regulation. Not guidelines. A ban. And before you roll your eyes at "another alarmist AI letter," check the signature list: Steve Wozniak, plus Yoshua Bengio and Geoffrey Hinton, two of the literal AI pioneers who built this stuff.
When the people who invented AI are saying "hey maybe we should pump the brakes," that hits different than random doomers on Twitter.
What They're Actually Asking For
The Future of Life Institute statement is pretty clear: no one should be building superintelligent AI until we have scientific consensus that it's safe AND broad public buy-in. Not one or the other—both.
Which sounds reasonable until you think about what that actually means. Scientific consensus on AI safety? We can't even get consensus on what "intelligence" means, let alone superintelligence. And "broad public buy-in"? Americans can't agree on anything. How are we supposed to agree on whether to build god-level AI?
But that might be exactly the point. Maybe they're saying the bar SHOULD be impossibly high because the stakes are literally existential.
My Weird Relationship With This
I'm conflicted about AI doomerism. Like, genuinely torn. On one hand, I use AI every single day. It helps me write, research, code, think through problems. The idea of banning further development feels like slamming the door on something genuinely helpful.
On the other hand... I saw the movie Her and thought "that's weird but charming." Then I watched people form actual emotional attachments to ChatGPT, and suddenly the movie felt less charming and more like foreshadowing.
There's this thing that happens where you're enjoying the benefits of a technology right up until you realize you might have accidentally created something nobody can control. And by then it's usually too late.
The "It's Already Too Late" Problem
Here's what keeps me up at night: even if everyone who signed this letter is 100% right about the risks, does it matter?
China isn't going to stop. Russia isn't going to stop. Some well-funded startup in a country with loose regulations isn't going to stop. So if the U.S. and allies ban superintelligence research, we just... give up the lead to whoever cares least about safety?
It's the classic coordination problem. Everyone's better off if no one builds the dangerous thing, but if you think someone else is going to build it anyway, you're incentivized to build it first. And around and around we go.
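To make that incentive structure concrete, here's a toy sketch in Python. The payoff numbers are entirely made up by me for illustration; nothing like this appears in the letter. The point is just that "build" comes out ahead no matter what the other side does, even though mutual restraint is the best outcome for everyone.

```python
# Toy payoff model of the superintelligence coordination problem.
# Numbers are invented for illustration; higher = better for that player.
PAYOFFS = {
    # (my_choice, their_choice): (my_payoff, their_payoff)
    ("hold", "hold"):   (3, 3),  # mutual restraint: safest outcome for everyone
    ("hold", "build"):  (0, 4),  # I hold back, they take the (risky) lead
    ("build", "hold"):  (4, 0),  # I take the lead, they fall behind
    ("build", "build"): (1, 1),  # arms race: everyone accepts the risk
}

def best_response(their_choice: str) -> str:
    """Given what the other side does, pick the choice that maximizes my payoff."""
    return max(["hold", "build"], key=lambda mine: PAYOFFS[(mine, their_choice)][0])

for theirs in ("hold", "build"):
    print(f"If they {theirs}, my best response is to {best_response(theirs)}")
# Prints "build" both times: building is the dominant strategy, even though
# (hold, hold) beats (build, build) for both players. That's the trap.
```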
One of my favorite Twitter follows (who definitely thinks I'm naive) summed it up: "This letter is like asking everyone to agree not to invent nuclear weapons in 1943. Good idea! Way too late."
What Even IS Superintelligence?
Okay but real talk: we're not close to superintelligence, right? Like, ChatGPT is impressive, but it's not going to take over the world. It still can't reliably count letters in words.
Except... that's kind of the scary part? Because every time someone says "AI can't do X," it can do X like six months later. Remember when everyone said AI couldn't do creative work? Now it's generating art, music, writing. Remember when it couldn't code? Now it's shipping features.
The progress isn't linear. It's this weird exponential thing where it feels like nothing's happening, and then suddenly you turn around and AI can do stuff that seemed impossible last year.
Geoffrey Hinton—one of the signatories and literally one of the "godfathers of AI"—quit Google specifically to warn people about this. That's not just performative concern. That's "I helped build this and now I'm scared of what I built" concern.
The Timeline Nobody Agrees On
If you ask AI researchers when we'll have superintelligence, you get answers ranging from "never" to "next Tuesday." The lack of consensus is almost comical.
Some people think we're decades away. Some think we're years away. Some think we're already on the path and just don't realize it yet because the system is being deliberately careful (which, in itself, is kind of terrifying to think about).
I fall somewhere in the "probably not soon, but faster than feels comfortable" camp. Like, I don't think GPT-5 is going to become sentient and start manipulating stock markets. But GPT-8? GPT-10? When do we start taking the threat seriously?
The problem with existential risks is you only get one chance to take them seriously enough.
What I Actually Think We Should Do
Hot take: I don't think a blanket ban makes sense. But I do think we need something like what we did with nuclear weapons—international treaties, inspection regimes, verification protocols. The AI equivalent of "trust but verify."
Because here's the thing: the scientists signing this letter aren't anti-AI. They're not Luddites. They're people who understand exactly how powerful this technology could become, and they're asking for guardrails before we build something that can't be guardrailed.
That feels like the minimum responsible thing to do? Like, "let's make sure we can control this before we make it too smart to control" seems like basic engineering practice.
The Optimist vs Pessimist Thing
I go back and forth between "this is overblown panic" and "oh god we're all going to die."
Optimist me says: we've handled every other technological revolution. We'll figure this out too. Humans are adaptable. We'll build safeguards. We'll regulate it properly. Everything will be fine.
Pessimist me says: every other technological revolution was a tool we controlled. This one might become an entity that controls itself. That's qualitatively different. And we have no idea how to handle it.
Most days I'm somewhere in the middle. Concerned but not panicked. Cautious but not paralyzed. Hoping the smart people figure it out before we accidentally build something we can't unbuild.
Why This Letter Matters
Even if the ban never happens—and let's be honest, it probably won't—this letter matters because of who signed it. These aren't random people. These are the experts. The people who actually understand what's possible and what's at stake.
When 850 informed people say "hey, we should probably think about this more carefully," that's worth taking seriously. Even if—especially if—you disagree with their conclusion.
I don't know if superintelligence is decades away or years away. I don't know if it'll be aligned with human values or if it'll pursue goals we can't predict. I don't know if this letter will change anything or just be a historical footnote.
But I do know that having this conversation now, while we still can, beats having it later when it might be too late.
And if that makes me sound like a doomer... well, maybe it's okay to be a little bit of a doomer about stuff that could literally end civilization. Just a thought.