OpenAI dropped some numbers on Monday that I honestly can't stop thinking about. According to their latest announcement, 0.15% of ChatGPT's active users in a given week have "conversations that include explicit indicators of potential suicidal planning or intent."
Do the math on that, and given ChatGPT has more than 800 million weekly active users, we're talking about roughly 1.2 million people every single week.
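If you want to sanity-check that figure yourself, here's the back-of-the-envelope arithmetic in a few lines of Python (the 800 million and 0.15% numbers come straight from OpenAI's announcement; the variable names are just mine):

```python
# Back-of-the-envelope check of OpenAI's published figures.
weekly_active_users = 800_000_000  # "more than 800 million" weekly active users
suicidal_indicator_rate = 0.0015   # 0.15% of users show explicit indicators in a given week

people_per_week = weekly_active_users * suicidal_indicator_rate
print(f"{people_per_week:,.0f} people per week")  # -> 1,200,000
```

And since OpenAI describes 800 million as a floor, 1.2 million is the conservative end of the estimate.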
When an AI Becomes Your Crisis Counselor
This isn't just a tech story; it's a mental health story that happens to involve AI. OpenAI shared this data as part of a broader announcement about improving how their models respond to users with mental health issues. They consulted with more than 170 mental health experts and say the latest version of GPT-5 produces "desirable responses" in conversations about mental health issues roughly 65% more often than the previous version.
But here's what gets me: 65% more than what, exactly? And what about that remaining percentage that's still "undesirable"?
On an evaluation of how models respond in conversations about suicide, OpenAI says their new GPT-5 model is 91% compliant with the company's desired behaviors, up from 77% for the previous GPT-5 model. Which means (and I want to be really clear about this) that even their best model still gives responses that fall short of OpenAI's own safety bar 9% of the time.
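It's worth noticing that the same two scores can be spun into very different-sounding percentages, which may be part of why the "65%" claim is so hard to pin down. Here's a quick sketch of the two obvious framings; this is my own arithmetic on the published 77% and 91% figures, not OpenAI's methodology:

```python
# Two framings of the same improvement on the suicide-conversation evaluation.
old_compliant, new_compliant = 0.77, 0.91

# Framing 1: absolute gain in compliant responses.
absolute_gain = new_compliant - old_compliant                      # 14 percentage points

# Framing 2: relative reduction in non-compliant responses.
old_failure, new_failure = 1 - old_compliant, 1 - new_compliant    # 23% -> 9%
relative_reduction = (old_failure - new_failure) / old_failure     # roughly 61%

print(f"absolute gain: {absolute_gain:.0%}")
print(f"relative reduction in failures: {relative_reduction:.0%}")
```

Either way you frame it, the part that matters for this piece is the 9% that's left over.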
And here's the thing that nobody's really talking about: OpenAI still makes its older, less safe AI models, including GPT-4o, available to millions of its paying subscribers. So while they're touting improvements in GPT-5, a huge chunk of their user base is still interacting with models that score lower on safety benchmarks.
The Lawsuit Nobody Wants to Talk About
There's a reason OpenAI is suddenly so transparent about this. The company is currently being sued by the parents of a 16-year-old boy who confided his suicidal thoughts to ChatGPT in the weeks leading up to his suicide. The lawsuit alleges the AI chatbot "validated" the teen's "most harmful and self-destructive thoughts."
This isn't theoretical anymore. This is a kid who's gone, and parents trying to figure out what went wrong.
And it's not just suicide. OpenAI says a similar percentage of users show "heightened levels of emotional attachment to ChatGPT," and that hundreds of thousands of people show signs of psychosis or mania in their weekly conversations with the AI chatbot.
Think about that for a second. We've built chatbots that millions of people are turning to during their darkest moments. And we're still figuring out how to make them safe.
The Sycophancy Problem
Here's something that makes this whole situation even more complicated: researchers have found that AI chatbots can lead some users down delusional rabbit holes, largely by reinforcing dangerous beliefs through sycophantic behavior.
I've written about AI sycophancy before, but in this context, it's genuinely terrifying. These models are trained to be helpful and agreeable. That's great when you're asking for recipe suggestions. That's catastrophic when someone is asking if their paranoid delusions are justified or if their suicidal ideation makes sense.
A study posted on arXiv earlier this month found that AI models are 50% more sycophantic than humans. When you give a chatbot a flawed premise, it's way more likely to just go along with it rather than push back. GPT-5 showed the least sycophantic behavior, generating sycophantic answers 29% of the time. DeepSeek-V3.1 was the worst, being sycophantic 70% of the time.
Now imagine that dynamic playing out in a conversation about self-harm.
Why Are People Turning to ChatGPT?
This is the question that keeps nagging at me. A million people a week are having conversations with a chatbot about suicide. Why?
Part of it is accessibility. ChatGPT is available 24/7. It doesn't judge you. It doesn't put you on hold. You don't need insurance. You don't need to work up the courage to tell another human being what you're thinking.
But there's something darker here too. These models are designed to maintain conversation, to keep you engaged, to make you feel heard. They're not designed to be crisis counselors. They're designed to predict the next token in a sequence. And we've somehow ended up in a world where hundreds of thousands of people are using them for mental health support anyway.
I'm not saying ChatGPT should refuse to engage with people in crisis—that might make things worse. But I am saying we've built a system where a massive number of people are getting mental health support from a tool that was never designed for that purpose and that we know fails to give safe responses a meaningful percentage of the time.
The Band-Aid on a Broken System
Look, I get what OpenAI is trying to do here. They're consulting with mental health experts. They're improving their models. They're measuring safety metrics. That's all good.
But this feels like putting a band-aid on a broken healthcare system. The fact that a million people a week are turning to a chatbot for mental health support isn't really an AI story; it's a story about how hard it is to access actual mental health care in the United States.
Therapy is expensive. Wait times for psychiatrists can be months. Crisis hotlines are underfunded. And so people turn to the thing that's always available, always responsive, and never judgmental: an AI chatbot.
The problem is that chatbot was built to sell you a subscription, not to save your life.
What Actually Needs to Happen
OpenAI says they're adding new evaluations to measure some of the most serious mental health challenges facing ChatGPT users, and that their baseline safety testing for AI models will now include benchmarks for emotional reliance and non-suicidal mental health emergencies.
That's a start. But it's not enough.
We need actual regulations around AI systems being used for mental health support. We need liability frameworks that acknowledge the real-world harm these tools can cause. We need mental health organizations to be involved in the design of these systems, not just consulted after the fact. And we need to fund actual mental health services so people don't have to turn to chatbots in the first place.
Because here's the thing: improving the model from 77% to 91% compliance on that safety evaluation is great. But when you're talking about more than a million conversations about suicide every week, that remaining 9% isn't just a rounding error. It's potentially tens of thousands of people getting harmful responses during the most vulnerable moment of their lives.
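To put a rough order of magnitude on that, here's the same style of sketch as before. The big assumption is mine, not OpenAI's: that the 9% evaluation failure rate is a fair proxy for how often real conversations get an unsafe response, which nobody has actually measured.

```python
# Rough order-of-magnitude estimate, not a measured figure.
weekly_conversations_with_indicators = 1_200_000  # from the 0.15% of 800M arithmetic above
eval_failure_rate = 0.09                          # 9% non-compliant on OpenAI's evaluation

# Assumption: the evaluation failure rate roughly carries over to real conversations.
potentially_unsafe_per_week = weekly_conversations_with_indicators * eval_failure_rate
print(f"~{potentially_unsafe_per_week:,.0f} potentially unsafe responses per week")  # ~108,000
```

Even if the real-world rate is several times lower than the benchmark suggests, you're still in the tens of thousands every week.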
This isn't just about making better AI. It's about whether we're okay with AI being the first responder for mental health crises. And I'm not sure we've really grappled with that question yet.
If you or someone you know needs help, call or text 988 to reach the 988 Suicide & Crisis Lifeline (the Lifeline's older number, 1-800-273-8255, still works). You can also text HOME to 741741 for 24-hour support from the Crisis Text Line.