OpenAI launched global group chats. Google shipped Gemini 3 to 2 billion people. Microsoft built agent infrastructure into Windows 11. Anthropic signed a $30 billion Azure deal. xAI is building a million-GPU supercomputer. China banned foreign AI chips. Trump tried to override state AI laws. And that's just the past ten days.

The Pace Is Unsustainable

I've been covering AI news for two years and this is the first time I genuinely can't keep up. Not because I'm not trying—because the volume and complexity have exceeded what any individual can meaningfully process.

Major announcements that would have dominated headlines for weeks two years ago now get buried within 48 hours by the next big thing. Benchmark leaderboards change daily. Compute commitments hit $30 billion. Infrastructure projects that should take years are finished in months.

We've normalized insanity. A trillion-parameter model? Cool, next. A $400 million distribution deal? Sure, whatever. Building the world's largest supercomputer in 122 days? Yeah, Elon does that. None of this is normal. We've just become numb to the scale.

Distribution Is the New Battleground

The most important shift this past week isn't any single technology—it's where AI companies are focusing their energy. Everyone realized simultaneously that having the best model doesn't matter if nobody uses it.

That's why OpenAI launched group chats. Why Perplexity is paying Snap $400 million to put its AI in front of Snapchat's users. Why Microsoft built agent infrastructure directly into the OS. Why Google shipped Gemini 3 everywhere on day one. Distribution, distribution, distribution.

The technical competition hasn't stopped. Models are still getting better. But the strategic competition shifted to "whose AI are you actually using every day?" And the answer increasingly depends on defaults, integrations, and being where users already are.

The Money Is Becoming Absurd

Anthropic committed $30 billion to Azure compute. xAI is raising $15 billion at a $230 billion valuation. OpenAI is valued at roughly $500 billion. Google, Microsoft, Amazon, and others are spending $200+ billion combined on AI infrastructure annually.

These aren't normal numbers. Capital is being deployed at a scale and speed beyond any technology buildout in history, including the internet boom and the space race. The only difference is that it's mostly private money, so there's less public accountability.

What happens when the returns don't materialize? What happens if AI capabilities plateau before justifying these valuations? What happens to all that infrastructure if the AI bubble pops?

I don't know. Nobody knows. We're all pretending the exponential growth continues forever and the business models will eventually make sense. Maybe they will. Maybe this is the dawn of a new industrial revolution. Or maybe we're building a trillion-dollar house of cards.

The Regulation Fight Is Just Starting

Trump's attempt to block state AI regulations failed this week, but the underlying conflict isn't resolved. Silicon Valley wants one federal standard they can influence. States want to experiment with different approaches. Neither side is backing down.

Meanwhile, China is decoupling from U.S. chips entirely. Europe is implementing its own AI Act. The UK is going in a different direction. We're fragmenting into incompatible regulatory regimes just as the technology becomes global.

That fragmentation makes everything harder—research, deployment, safety coordination, talent movement. But it's the world we're getting because nobody trusts anyone else to write the rules fairly.

The Job Displacement Is Real

Sixty-seven percent of companies report AI is already affecting jobs. Gen Z is watching the entry-level job market collapse. Tech unemployment among young workers is up 3 percentage points in a year. This isn't hypothetical anymore.

The optimistic case is that new jobs emerge to replace those automated away, and we all end up more productive and better paid. The pessimistic case is mass unemployment among educated workers who thought they were safe.

We're finding out which scenario is correct in real time, with millions of people's livelihoods at stake. The social contract around education and work is breaking, and we don't have a replacement ready.

Nobody Actually Knows What's Happening

Here's what bothers me most: the smartest people in AI disagree fundamentally about where this is going. Some think AGI is 2-3 years away. Others think it's decades. Some think current approaches will scale to superintelligence. Others think we'll hit a wall.

The people building this technology can't agree on basic questions like "will transformative AI arrive this decade?" or "are we on a path to AGI?" or "what are the actual capabilities of these systems?"

If the experts don't know, how is anyone else supposed to make informed decisions? How do we regulate something nobody understands? How do we prepare for disruption when we can't predict its magnitude or timeline?

The Infrastructure Is Irreversible

Once you've built a million-GPU supercomputer, you can't unwind that. Once AI is embedded in every operating system and productivity tool, removing it isn't an option. Once companies restructure around AI capabilities, going back isn't feasible.

We're making irreversible commitments based on incomplete information and uncertain predictions. Maybe that's always how technology works. But the scale and speed feel different this time.

The decisions being made in 2025—about infrastructure, investment, regulation, workforce—will shape the next 30 years. And they're being made hastily, competitively, with misaligned incentives and imperfect understanding.

What Actually Matters

After a week of overwhelming news, here's what I think matters most:

Distribution beats innovation: OpenAI might have the best model, but Google reaches 2 billion people instantly. Microsoft is in every enterprise. Being good matters less than being everywhere.

Infrastructure is king: The companies that control chips, cloud, and compute will shape AI more than the companies building models. NVIDIA, Microsoft, Amazon, and Google have structural advantages that startups can't replicate.

Regulation will fragment: We're not getting one global approach to AI governance. We're getting competing regimes that don't interoperate, and that creates friction and risk.

Job displacement is happening: The data is clear. AI is affecting employment, especially for young knowledge workers. We're not prepared for the transition.

Nobody has answers: The uncertainty isn't just about technology—it's fundamental. We don't know where this goes, how fast it happens, or what the implications are.

My Honest Take

I'm exhausted. Not physically—mentally. Keeping up with AI news in November 2025 feels like drinking from a fire hose while someone's adding more hoses.

I'm excited about some of the technology. Gemini 3's agentic capabilities are legitimately cool. Microsoft's agent infrastructure could change how we use computers. The raw engineering behind Colossus is impressive regardless of the environmental issues.

But I'm also deeply uncomfortable with how fast we're moving without understanding where we're going. We're automating jobs before figuring out what people will do instead. We're building infrastructure with massive environmental costs because we're in a race. We're concentrating power in a handful of companies making irreversible decisions based on quarterly targets.

The AI boom feels simultaneously like the most important thing happening in the world and a collective delusion we're all participating in because nobody wants to be left behind.

I don't know which it is. I suspect it's both. And that uncertainty—living through a technological revolution without knowing if it'll be transformative or catastrophic—is the defining experience of working in tech right now.

Buckle up. Next week will probably be even crazier.