Bloomberg's Mark Gurman reported this week that the "really useful" version of Siri won't arrive until spring 2026—over a year later than originally promised. Apple's been teasing major AI upgrades since WWDC 2024, but the features that actually matter keep getting pushed back. And honestly? That might be the smartest thing Apple's doing in AI right now.

What Actually Got Delayed

The advanced Siri overhaul, codenamed "LLM Siri," was supposed to ship with iOS 26 this fall. Now it's pushed to iOS 26.4, which historically releases in March or April. That means spring 2026 before iPhone users get the AI assistant Apple's been promising.

The delayed features are the ones that sound genuinely useful: on-screen awareness (Siri understands what you're looking at), cross-app actions (seamlessly handling tasks across multiple apps), and contextual memory (remembering previous conversations and user preferences).

What's shipping on time is the incremental stuff: Type to Siri, ChatGPT integration as a fallback option, and some writing tools. Those are nice but not transformative. The gap between "here's some AI features" and "your phone is genuinely smarter" is huge, and Apple keeps pushing the latter further out.

The Gemini Complication

Here's where it gets messy: Apple is reportedly paying Google about $1 billion a year to integrate Gemini into Siri for the features that require advanced reasoning. That deal was reported back in November, with custom Gemini models running on Apple's Private Cloud Compute infrastructure.

But now those Gemini-powered features are also delayed to spring 2026. That suggests one of three things: (1) Google isn't ready to deliver what Apple needs, (2) Apple's integration work is taking longer than expected, or (3) both companies are having trouble making this partnership actually work.

The technical challenge is real. Apple wants AI that meets their privacy standards: on-device when possible, Private Cloud Compute when necessary, and raw user data never sent to Google. Building that infrastructure while maintaining Gemini's performance is not trivial.
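
To make that constraint concrete, here's a minimal sketch of the kind of redaction layer such a design implies, assuming personal values get swapped for opaque tokens before anything leaves Apple's hardware. Every type and function name here is invented; Apple hasn't published anything about how this boundary actually works.

```swift
import Foundation

/// A user request split into the part a third-party model may see
/// and the personal context that must stay on Apple-controlled hardware.
/// (Hypothetical types for illustration only.)
struct AssistantRequest {
    let taskPrompt: String          // e.g. "Summarize the thread from Dana"
    let personalContext: [String]   // names, addresses, message bodies, etc.
}

/// Stand-in for the boundary between Private Cloud Compute and an
/// external model provider. Entirely invented for this sketch.
struct ExternalModelGateway {
    func send(sanitizedPrompt: String) async -> String {
        // Placeholder: a real system would call the provider here.
        return "response for: \(sanitizedPrompt)"
    }
}

/// Swap personal strings for opaque tokens before the prompt leaves
/// Apple's infrastructure; restore them locally once the reply returns.
func askExternalModel(_ request: AssistantRequest,
                      via gateway: ExternalModelGateway) async -> String {
    var prompt = request.taskPrompt
    var tokens: [String: String] = [:]

    for (index, value) in request.personalContext.enumerated() {
        let token = "<entity_\(index)>"
        tokens[token] = value
        prompt = prompt.replacingOccurrences(of: value, with: token)
    }

    var reply = await gateway.send(sanitizedPrompt: prompt)

    // Re-substitute the redacted values only after the response is back
    // on Apple-controlled infrastructure.
    for (token, value) in tokens {
        reply = reply.replacingOccurrences(of: token, with: value)
    }
    return reply
}
```

The shape of the data flow is the point: the external model only ever sees placeholders, and the real values are restored on Apple's side after the response comes back. Making that work without degrading answer quality is exactly the kind of thing that eats engineering quarters.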

Why Apple Is Moving So Slowly

Tim Cook keeps saying Apple doesn't want to ship AI that's merely "pretty good." They want to ship AI that works reliably for hundreds of millions of users across diverse use cases. That's a different bar from the one OpenAI, Google, or Anthropic face with their early-adopter user bases.

When ChatGPT hallucinates or misunderstands a query, users shrug it off because they know it's experimental. When Siri fails, people get angry because they expect their phone to work. Apple can't afford to ship AI that's impressively capable 90% of the time but confidently wrong the other 10%.

There's also the hardware dependency. Apple Intelligence requires an iPhone 15 Pro or newer, an iPad with an A17 Pro or M-series chip, or a Mac with Apple Silicon. That's a small fraction of Apple's install base right now. Shipping features that only work on the newest, most expensive devices is a tough sell when Android competitors are rolling out AI features to mid-range phones.

The On-Device Constraint

Apple's entire AI strategy revolves around on-device processing for privacy reasons. That's admirable but limiting. On-device models are smaller and less capable than cloud-based frontier models. You can't run GPT-5-scale models on a phone chip.

Apple's approach is to use small, efficient models locally when possible and fall back to Private Cloud Compute (running larger models on Apple's servers) for complex queries. That hybrid architecture is smart in theory but complex in practice.

Every query requires routing logic to decide: is this simple enough for on-device processing, or does it need the cloud? Then there's latency management, ensuring cloud responses are fast enough to feel instant. And maintaining consistency between on-device and cloud-based Siri so the experience doesn't feel janky.
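
For a sense of what that routing decision involves, here's a deliberately naive sketch. The tiers, fields, and thresholds are all invented; Apple has said nothing about how its actual router works.

```swift
/// Where a query gets handled; the two tiers mirror the hybrid
/// architecture described above. Hypothetical names throughout.
enum ExecutionTier {
    case onDevice      // small local model, lowest latency
    case privateCloud  // larger model on Apple's servers
}

struct Query {
    let text: String
    let touchesPersonalData: Bool   // mail, messages, photos, etc.
    let estimatedComplexity: Int    // 0...10, from a cheap local classifier
}

/// A toy router: simple, self-contained queries stay on device;
/// anything heavier escalates to the cloud.
func route(_ query: Query) -> ExecutionTier {
    // Personal-data lookups favor staying local: the data is already here.
    if query.touchesPersonalData && query.estimatedComplexity <= 3 {
        return .onDevice
    }
    // Short, simple queries aren't worth a network round trip.
    if query.estimatedComplexity <= 5 && query.text.count < 200 {
        return .onDevice
    }
    return .privateCloud
}
```

The branching itself is trivial. The hard part is that this call has to be made in milliseconds, before the query is fully understood, and a wrong guess either wastes a network round trip or hands a hard question to a model too small to answer it.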

What Competitors Are Doing

Google shipped Gemini on Pixel phones months ago. Samsung has Galaxy AI integrated across its flagship devices. Even smaller Android manufacturers are adding AI features through partnerships with OpenAI or local AI labs.

Microsoft is embedding Copilot everywhere—Windows, Office, Edge. They're not waiting for perfection; they're shipping iteratively and improving based on real-world usage.

Apple's approach is the opposite: build it internally, test extensively, only ship when it meets their quality bar. That's the Apple way, but it means they're perpetually behind on features that competitors already offer.

The Risk of Being Too Cautious

There's a real possibility that by the time Apple ships "really useful" Siri in spring 2026, the AI landscape will have moved on. GPT-6 might be out. Google's Gemini will be multiple generations ahead. User expectations for what AI assistants can do will have risen dramatically.

Apple risks becoming the company that ships AI features a year or two after everyone else has moved past them. That's not where Apple wants to be—they've always prided themselves on not being first but being best. But "best" requires actually shipping.

The longer Apple waits, the more they're betting that their integration and privacy approach will differentiate enough to overcome the feature gap. That's a risky bet when users are getting used to more capable AI assistants on other platforms.

The $1 Billion Question

Paying Google $1 billion for Gemini integration makes sense if Apple can't build competitive AI in-house fast enough. But it also signals that Apple's own AI models aren't where they need to be.

Apple has world-class ML researchers and huge resources. The fact that they're licensing Google's technology rather than relying on internal models suggests they've concluded they can't catch up to frontier labs on model quality in the near term.

That's probably the right call. Building GPT- or Gemini-level models requires expertise, compute, and data where Apple has no particular advantage. Better to partner with Google (and OpenAI, as they already do for the ChatGPT fallback) than pretend they can match frontier labs internally.

What Actually Ships This Year

iOS 26 will get Type to Siri, some writing tools, and ChatGPT integration as a fallback for queries Siri can't handle. That's underwhelming but at least functional.

Image Playground for generating images and Genmoji for custom emoji are cute consumer features that'll get demoed at keynotes but probably won't change how people use their phones.

What won't ship: the contextual awareness that lets Siri understand what you're looking at, the cross-app orchestration that makes it genuinely useful for complex tasks, and the memory that makes it feel like a real assistant rather than a stateless query engine.

My Take

Apple's caution is probably smart even if it's frustrating. They can't afford a Siri launch that embarrasses them the way early voice assistants did. Their brand is built on things working reliably, and AI isn't reliable yet.

But there's a cost to moving this slowly. Every quarter that passes with competitors shipping better AI features is a quarter where Apple looks behind, especially in markets like China, where local AI assistants are advancing rapidly and iPhones already struggle.

The Gemini partnership is pragmatic—admit you can't build frontier models in-house, partner with someone who can, and focus on the integration and privacy layer where Apple has genuine advantages. That's probably the right strategy even if it requires swallowing some pride.

I just wish they'd be more honest about timelines. Stop announcing features years in advance if you don't have confidence you can ship them. Under-promise and over-deliver worked for Apple for decades. In AI, they're doing the opposite.

Spring 2026 is when we'll actually know if Apple's AI strategy works. Until then, it's a lot of promises and not much delivery.