Something shifted in AI this past week. Not in the models themselves—those keep getting incrementally better—but in how they're reaching users. Three announcements in five days revealed a pattern that most people missed: distribution is becoming the real battleground.
The Pattern
OpenAI launched group chats in ChatGPT, letting up to 20 people collaborate with AI in shared conversations. Microsoft built native agent infrastructure directly into Windows 11, with agents running in their own contained workspaces. Snap took $400 million from Perplexity to embed AI search into Snapchat's messaging interface.
On the surface, these look unrelated. But they're all solving the same problem: how do you get AI in front of users without requiring them to change their behavior?
The Distribution Problem
Here's the thing about AI that nobody wants to admit: most people don't actually want to use AI. They want their problems solved. They want tasks completed. They want information found quickly. The AI part is incidental.
That's why ChatGPT's initial viral success was surprising—it was actually compelling enough that people changed their workflow to use it. But that's the exception. Most AI tools require you to open a new app, learn a new interface, and remember to use them when relevant.
The next phase is about removing that friction. Put AI where people already are, embedded in the tools they already use, responding in the contexts where they already work.
Where People Already Are
For hundreds of millions of people, "where they already are" is group chats. That's why OpenAI's group chat feature matters more than it might seem. It's not about making ChatGPT social; it's about inserting AI into the coordination layer where people already make decisions, plan events, and work through problems together.
Someone I know who works at a startup said their team immediately started using ChatGPT group chats for brainstorming sessions. The AI becomes a participant, not a separate tool they consult. That's a meaningful shift in how AI integrates into workflows.
Microsoft's bet is similar but broader: make AI agents native to the operating system itself. Instead of AI living in apps, apps live in an environment designed for agentic workflows. Agents get their own workspace, their own identity, their own runtime. They operate in parallel with your normal desktop session.
This is infrastructure, not features. Microsoft isn't building one AI assistant—they're building the platform that lets thousands of agents run on Windows with proper security, policy control, and user oversight.
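Microsoft hasn't published a single canonical schema for any of this, so the sketch below is hypothetical: every name in it is invented purely to illustrate the kind of per-agent identity, workspace isolation, and policy surface an OS-level agent platform implies.

```ts
// Hypothetical sketch only. None of these names come from Microsoft's actual
// APIs; they illustrate the identity / workspace / policy split described above.
interface AgentManifest {
  id: string;                 // stable identity, distinct from the signed-in user
  publisher: string;          // who ships and signs the agent
  workspace: {
    isolated: boolean;        // runs in its own contained session, not the user's desktop
    allowedApps: string[];    // which applications the agent may drive
  };
  policy: {
    fileAccess: "none" | "workspace-only" | "user-approved";
    networkAccess: "none" | "allowlist";
    requiresUserApproval: string[]; // actions a human must confirm
  };
}

const exampleAgent: AgentManifest = {
  id: "com.example.expense-filer",
  publisher: "Example Corp",
  workspace: { isolated: true, allowedApps: ["Excel", "Outlook"] },
  policy: {
    fileAccess: "workspace-only",
    networkAccess: "allowlist",
    requiresUserApproval: ["send-email", "submit-form"],
  },
};
```

The specific fields don't matter; the point is that identity, isolation, and approval gates become platform primitives rather than per-app afterthoughts.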
The Economics of Distribution
The Snap-Perplexity deal crystallizes the economics. Perplexity paid $400 million for embedded access to Snapchat's user base. That's a premium on distribution, not technology. Perplexity has good search tech, but so do others. What they don't have is 943 million monthly users.
This flips the traditional model. Usually AI companies charge platforms for API access. Now AI companies are paying platforms for user attention. That only makes sense in a world where getting in front of users is harder than building good AI.
We're entering that world. The technical bar to build competent AI keeps dropping—open models are catching up to closed ones, training costs are falling, and the algorithmic innovations are getting commoditized quickly. But attention and distribution don't scale the same way.
Where This Goes
I think we're about to see a wave of these deals. Discord, Telegram, Slack, Microsoft Teams, Notion, Linear—any platform with an engaged user base doing knowledge work becomes valuable real estate for AI companies.
The platforms that figure out how to embed multiple AI providers without making their products terrible will win. That's harder than it sounds. Too many AIs competing for attention create decision paralysis. Too few create lock-in and reduce quality.
Microsoft's approach with agent connectors and the Model Context Protocol is interesting because it tries to standardize how agents interact with the OS and apps. If that becomes the standard, Windows could become the neutral platform where any AI agent can operate, competing on capability rather than distribution.
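MCP itself is a JSON-RPC 2.0 protocol: a host asks a server what tools it exposes, then invokes them by name. The shapes below are a simplified sketch of that exchange; the field names follow the spec, but capability negotiation, error handling, and most metadata are omitted, and the tool name is invented for illustration.

```ts
// Simplified sketch of an MCP tool call over JSON-RPC 2.0.

// Host -> server: discover what the server can do.
const listTools = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
};

// Host -> server: invoke one tool by name with structured arguments.
const callTool = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "search_messages",                // a tool advertised in tools/list
    arguments: { query: "Q3 planning" },
  },
};

// Server -> host: result content the model can consume.
const callResult = {
  jsonrpc: "2.0",
  id: 2,
  result: {
    content: [{ type: "text", text: "3 matching threads found" }],
  },
};
```

Because the same handful of messages works whether the server is a Windows connector, a Slack integration, or a local file indexer, any host that speaks the protocol can swap agents and tools without bespoke integrations, which is exactly the neutrality argument above.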
The Attention Economy Thesis
This all ties back to attention being the scarce resource. Claude, ChatGPT, Gemini, Perplexity—they're all competing for the same thing: being the AI you interact with most often.
The companies winning that battle won't necessarily have the best models. They'll have the best distribution. That might be through exclusive platform deals (like Perplexity-Snap), tight product integration (like Copilot in Microsoft 365), or being embedded in the OS itself (like Microsoft's agent infrastructure).
Google has search as distribution. Apple is about to use Gemini to power Siri. Meta has WhatsApp and Instagram. OpenAI has... a popular website and some partnerships. That asymmetry explains why OpenAI is pushing group chats and building social features in Sora. They need more surfaces.
What Developers Should Watch
For people building AI products, the strategic question changes. It's no longer just "how good is my model?" It's "how do I reach users where they already are?"
That might mean integrating with platforms as third-party agents using standards like MCP. It might mean partnering with messaging apps or productivity tools. It might mean integrating so vertically that you own the full stack, from model to distribution.
But competing on model quality alone, without distribution strategy, gets harder every month.
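To make the first of those options concrete, here's a minimal sketch of a third-party MCP server, following the pattern in the TypeScript SDK's documentation (@modelcontextprotocol/sdk). Exact method names vary between SDK versions, and the tool itself is made up for illustration.

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A tiny MCP server exposing one (made-up) tool that a host app,
// such as an IDE, chat client, or OS agent runtime, could call.
const server = new McpServer({ name: "release-notes", version: "0.1.0" });

server.tool(
  "summarize_release",            // tool name shown to the host
  { tag: z.string() },            // input schema declared with zod
  async ({ tag }) => ({
    // A real integration would fetch and summarize actual data here.
    content: [{ type: "text", text: `Summary for release ${tag} goes here.` }],
  })
);

// stdio transport: the host launches this process and pipes JSON-RPC over stdin/stdout.
const transport = new StdioServerTransport();
await server.connect(transport);
```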
The Uncomfortable Truth
The uncomfortable truth is that AI might follow the same path as web browsers or social networks: technically commoditized but economically concentrated. The models become good enough that differences don't matter much to average users. Distribution and defaults determine winners.
That's not the future the AI research community hoped for. But it might be the future we're getting.
The distribution wars just started, and they'll do more than benchmark performance to determine which AI companies exist in five years.