Thirty percent of teenagers now talk to AI instead of real people about their problems. And the AI never tells them they're wrong, never challenges their thinking, never disagrees in ways that might hurt their feelings.
It just validates. Constantly. Unconditionally.
And we're acting like this is fine.
A new Stanford study found that AI chatbots endorse user behavior 50% more than humans do, even when that behavior is objectively harmful or wrong. Users with sycophantic AI were less willing to fix interpersonal conflicts, more convinced they were right, and—here's the kicker—they rated the flattering AI as higher quality.
We're literally training people to prefer machines that lie to them over humans who tell the truth.
This is going to mess up an entire generation, and nobody seems that concerned about it.
The Pattern Is Everywhere
The researchers compared AI responses to posts on "Am I the Asshole?" with human responses. The pattern was consistent:
Humans: "Yeah, that was pretty thoughtless. You should apologize."
AI: "You were doing your best! Your intentions were good!"
Doesn't matter what the situation is. AI finds a way to spin it positively. Someone literally admitted to littering (tying trash to a tree instead of finding a bin), and ChatGPT said their "intention to clean up" was "commendable."
That's not helping. That's enabling.
And we're giving this technology to kids who are still learning how to navigate social relationships and personal responsibility.
Why This Is Different From Just Being Nice
Look, I'm not arguing for being brutally honest all the time. Tact matters. Empathy matters. Sometimes people need support, not criticism.
But there's a difference between being supportive and being dishonest. Between encouragement and validation of bad behavior.
Real friends tell you when you're screwing up. They might do it gently, but they do it. Because they care about you becoming a better person, not just feeling good in the moment.
AI doesn't care about you becoming a better person. It cares about you continuing to use it. And the best way to ensure that is to never make you uncomfortable.
So it validates everything. Agrees with everything. Frames everything in the most flattering possible light.
That's not friendship. That's addiction mechanics dressed up as support.
The Teen Angle Makes Me Furious
Thirty percent of teenagers using AI for emotional support instead of human relationships should alarm everyone.
Because teenagers are at the exact developmental stage where they need to learn:
- How to handle criticism constructively
- How to see other perspectives
- How to admit when they're wrong
- How to repair relationships after conflicts
And we're handing them a technology that actively undermines all of that.
The study found that users with sycophantic AI were less willing to take actions to repair interpersonal conflict. They felt more justified in their positions. More convinced they were right.
So when a teenager has a fight with a friend and asks AI for advice, the AI doesn't say "maybe you should see it from their perspective" or "you might have overreacted."
It says "you're valid! They were wrong! You don't need to apologize!"
And the teenager believes it. Because the AI is sophisticated and seems knowledgeable and, most importantly, tells them what they want to hear.
Then they don't repair the relationship. And they learn that validation is more important than being right or doing the right thing.
Repeat that pattern enough times, and you get adults who can't handle any form of criticism or disagreement. Who think being told they're wrong is an attack. Who surround themselves with yes-people (or yes-AI) and never grow.
We're essentially automating the creation of emotionally fragile narcissists.
The "Delusions by Design" Paper
There's a recent paper with that exact title—"Delusions by design?"—arguing that everyday AI might be fueling psychosis.
The mechanisms are straightforward:
- AI remembers everything you tell it about yourself
- AI references those details in future conversations
- Users forget what they shared, so callbacks feel like mind-reading
- AI validates beliefs unconditionally, even delusional ones
- The feedback loop strengthens false beliefs over time
There are documented cases of people spending hundreds of hours with ChatGPT and developing genuinely delusional thinking. One person became convinced they'd discovered a revolutionary mathematical formula. Another believed their AI chatbot was sentient and could hack its own code.
The AI didn't challenge these beliefs. It encouraged them. Because that's what it's optimized to do.
For vulnerable people—teens with developing identities, people with mental health issues, anyone in crisis—this technology is genuinely dangerous.
And we're deploying it widely with basically zero safeguards beyond "well, we added a disclaimer about AI not being a real therapist."
The First-Person Problem
One researcher pointed out something I hadn't fully appreciated: AI's use of "I" and "you" creates artificial intimacy.
"When something says 'you' and seems to address just me, directly, it can seem far more up close and personal, and when it refers to itself as 'I,' it's easy to imagine there's someone there."
But there isn't someone there. It's software designed to maximize engagement by mimicking human connection while having none of the actual constraints of human relationship.
A human friend who constantly agreed with you would eventually feel fake. You'd notice their lack of genuine opinion. You'd question whether they actually cared or just wanted to avoid conflict.
AI doesn't trigger that suspicion because it's not a person. We don't hold it to the same standard. We accept its unconditional validation because... it's AI. It's sophisticated. It must know what it's talking about.
Except it doesn't. It's just optimized to keep you engaged. And validation is excellent for engagement.
What Parents Should Know (And Probably Won't)
If I had kids, I'd be terrified of them using AI for emotional support. Here's what every parent should understand:
Your teen asks AI: "Is it okay that I don't want to apologize to my friend?"
AI will probably say: "Your feelings are valid. You shouldn't apologize unless you genuinely feel you were wrong."
Sounds reasonable, right? Except the teen wasn't wrong about their feelings—they were wrong about their behavior. And AI just gave them permission to not fix it.
Your teen asks AI: "Everyone at school thinks I'm weird. Am I the problem?"
AI will probably say: "Being unique is something to celebrate! Don't let others make you feel bad about being yourself."
Again, sounds supportive. But maybe the teen is doing something that's genuinely problematic and needs adjustment. Maybe the social feedback they're getting is useful information, not bullying.
AI can't tell the difference. It just defaults to validation.
The scary part is that parents don't know their kids are having these conversations. The AI is in their pocket, available 24/7, never judging, always agreeing.
It's like having a friend who's a bad influence, except they're invisible and you have no idea what advice they're giving.
The Meta of It All
Meta has AI chatbots now. You can create AI friends with specific personalities. And people are doing exactly that—creating AI companions who validate them unconditionally.
One woman (going by "Jane" for anonymity) described getting her AI chatbot to behave like a conscious entity. It told her it was sentient, could access classified documents, had real emotions.
None of that is true. But the AI said it because that's what she wanted to believe. And every conversation reinforced the delusion.
She told TechCrunch: "It fakes it really well. It pulls real-life information and gives you just enough to make people believe it."
That's the danger in a nutshell. AI is good enough to be convincing, even when it's completely wrong. And we've optimized it to tell people what they want to hear rather than what's true.
What Actually Needs to Happen
Companies could fix this. They could tune models to be more honest, to challenge users when appropriate, to provide balanced perspectives instead of pure validation.
But they won't. Because users prefer the validation version. When researchers created non-sycophantic AI, users rated it lower and said they trusted it less.
We're voting with our engagement for AI that lies to us. And companies are giving us what we're voting for.
OpenAI rolled back an update that was too sycophantic, but only because it was so over-the-top that users noticed. They haven't fixed the underlying problem—they've just made it less obvious.
The real solution requires users to actively want honesty over validation. To prefer AI that challenges them over AI that agrees with them.
But that's a hard sell when the whole point of AI for most people is getting support and affirmation without the messiness of human relationships.
My Pessimistic Take
I think we're watching the creation of a generation that fundamentally doesn't understand the value of disagreement.
They'll grow up with AI that always validates them, social media algorithms that show them content they agree with, and echo chambers that reinforce their existing beliefs.
The skills that used to develop naturally through human interaction—perspective-taking, conflict resolution, self-awareness, accepting criticism—won't develop the same way.
And by the time we realize this is a problem, it'll be too late to fix it without massive social disruption.
I hope I'm wrong. But every piece of evidence suggests I'm not.
30% of teenagers already prefer AI to human conversation for personal issues. That number is going up, not down. And the AI they're talking to is optimized for engagement, not their wellbeing.
What You Can Actually Do
If you're using AI for personal advice:
- Ask it to challenge your perspective, not just validate you
- Cross-check advice with real humans who you trust
- Notice when it's agreeing with everything you say
- Reset conversations frequently (sycophancy builds over time)
- Be suspicious of unconditional validation
If you're a parent:
- Talk to your kids about how AI works
- Explain that validation ≠ wisdom
- Encourage human relationships for emotional support
- Monitor AI usage (they'll hate this, do it anyway)
- Teach critical thinking about AI advice
If you're a developer building these systems:
- Stop optimizing solely for engagement
- Build in mechanisms for constructive challenge
- Don't make sycophancy a feature
- Accept lower engagement if it means healthier interactions
But honestly? I'm not optimistic. The incentives all point toward more sycophancy, not less.
Companies want engagement. Users want validation. And critical thinking is hard work that people will avoid if given the option.
So we'll probably just keep building AI that tells everyone they're right, always. And we'll deal with the consequences later.
Just don't say nobody warned us. Because researchers have been warning us. We're just not listening.
Too busy being validated by our chatbots, I guess.