Apple just admitted what we've all known for years: Siri is broken. The fix? Pay Google roughly $1 billion annually to power the next version with Gemini AI. Yes, you read that right. The company that built its entire brand on doing everything in-house is outsourcing its voice assistant to its biggest competitor.
The Deal Nobody Saw Coming
According to Bloomberg, Apple spent months testing AI models from OpenAI, Anthropic, and Google. After an "extensive evaluation period," they landed on Google's 1.2-trillion-parameter Gemini model. The custom version will handle Siri's summarization and planning functions—basically the brain work of understanding complex requests and figuring out how to execute them.
For context, Apple's current cloud-based AI uses 150 billion parameters. Google's model is eight times larger. That's the gap Apple needs to close, and apparently they've decided they can't do it fast enough on their own.
Why This Actually Makes Sense
Here's the uncomfortable truth: Apple isn't a frontier AI lab. They're great at hardware, ecosystem integration, and on-device optimization. But bleeding-edge language models? That's not their game. They tried to build Apple Intelligence in-house and... well, Siri still misunderstands a request as simple as setting a timer half the time.
Someone I know at Apple (who will absolutely kill me if I use their name) said the decision came down to speed and pragmatism. They could spend another two years trying to catch up to Gemini and Claude, or they could just license the technology and ship something that actually works in spring 2026.
Bloomberg reports that Anthropic's fees were too high, which ruled out Claude. OpenAI was apparently in the running, but given Apple's existing $20 billion annual deal with Google for search, it made more financial sense to deepen that relationship.
The Technical Details
Google's Gemini will run on Apple's Private Cloud Compute infrastructure, which means no user data flows to Google. It's all processed on Apple's servers. Gemini handles the summarizer and planner components—the parts that synthesize information and figure out how to execute multi-step tasks. Some Siri features will still use Apple's in-house models, so it's a hybrid approach.
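Apple hasn't published how the dispatch actually works, but the hybrid idea is easy to picture: a router that sends the heavy reasoning tasks (summarizing, planning) to the cloud model running on Apple's servers, and keeps everything else local. Here's an illustrative sketch — the task names, functions, and routing rule are all my invention, not Apple's design:

```python
from dataclasses import dataclass

# Hypothetical: tasks delegated to the licensed cloud model.
CLOUD_TASKS = {"summarize", "plan"}

@dataclass
class Request:
    task: str
    text: str

def handle_on_device(req: Request) -> str:
    # Simple intents stay on the device's own models.
    return f"[on-device] {req.task}: {req.text}"

def handle_cloud(req: Request) -> str:
    # In the real system this would invoke the partner model hosted on
    # Apple's Private Cloud Compute, so user data never reaches Google.
    return f"[private-cloud] {req.task}: {req.text}"

def route(req: Request) -> str:
    """Send heavy reasoning to the cloud model, everything else locally."""
    if req.task in CLOUD_TASKS:
        return handle_cloud(req)
    return handle_on_device(req)

print(route(Request("plan", "book dinner then a movie")))
print(route(Request("set_timer", "10 minutes")))
```

The point of the split: the expensive model only sees the queries that actually need it, and it sees them inside Apple's own infrastructure.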
The model uses a Mixture-of-Experts architecture, meaning it has over a trillion total parameters but only activates a fraction of them for each query. That keeps inference costs manageable while preserving the capacity of the full parameter count.
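If Mixture-of-Experts sounds abstract, the core trick fits in a few lines of Python. Everything below (dimensions, expert count, the toy router) is illustrative, not Gemini's actual design; it just shows how a learned router can touch only 2 of 8 expert networks per token while all 8 still exist in memory:

```python
import numpy as np

rng = np.random.default_rng(0)

D = 32         # hidden dimension (toy-sized)
N_EXPERTS = 8  # total experts: all their parameters exist
TOP_K = 2      # experts actually activated per token

# Router: scores each token vector against every expert.
W_router = rng.standard_normal((D, N_EXPERTS)) / np.sqrt(D)
# Each expert is a small feed-forward layer (here, one weight matrix).
experts = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(N_EXPERTS)]

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route token x to its top-k experts and mix their outputs."""
    logits = x @ W_router
    top = np.argsort(logits)[-TOP_K:]   # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()            # softmax over just the chosen experts
    # Only TOP_K of the N_EXPERTS weight matrices are used for this token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D)
out = moe_forward(token)
print(out.shape)                        # a normal D-dimensional output
print(f"active fraction: {TOP_K / N_EXPERTS:.0%}")
```

In this toy, each token pays for 25% of the parameters while the model as a whole keeps all eight experts' worth of capacity — the same principle, scaled way down, behind activating only a fraction of a trillion-parameter model.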
The Long Game
Apple isn't planning to rely on Gemini forever. They're actively developing their own trillion-parameter cloud model that could be ready as soon as next year. This is classic Apple playbook: lean on a partner until your in-house solution catches up, then cut them loose.
They did it with Intel chips (now using Apple Silicon). They did it with Weather Channel data (now using their own). They tried it with Maps (with... mixed results). Now they're doing it with AI.
The difference is that AI models don't stand still. By the time Apple ships their own trillion-parameter model, Google, OpenAI, and Anthropic will have moved even further ahead. It's an arms race Apple joined late.
What This Means for Users
The revamped Siri is slated for spring 2026 with iOS 26.4. You'll be able to ask it personal questions like "find the book recommendation from Mom" and it'll actually search through your messages and generate a useful response. Multi-step planning should work better. Summarization will improve.
But will it undo years of Siri being the punchline of voice assistant jokes? Bloomberg's Mark Gurman warned "there's no guarantee users will embrace it, that it will work seamlessly or that it can undo years of damage to the Siri brand." That's refreshingly honest for a tech reporter.
The Bigger Picture
This deal reveals something important about where we are in AI development. The leading models are so far ahead that even Apple—with $200 billion in cash—can't quickly replicate them. That's a level of technical moat we haven't seen in consumer tech before.
It also shows how much Apple is willing to compromise its principles to compete. They built their brand on privacy and vertical integration. Now they're running Google's AI on their infrastructure to power their signature voice assistant. That's a pretty big shift in strategy, even if they're dressing it up as temporary.
I'll be curious to see if this actually makes Siri useful. But mostly I'm fascinated by what it reveals about Apple's position in the AI race: further behind than anyone wanted to admit.