Apple weighs using Anthropic or OpenAI to power Siri
Rumors: Perplexity, Search, and Foundation Models
- Some argue rumors of Apple buying Perplexity “make no sense” because Perplexity wraps others’ models and doesn’t own a foundation model; Mistral is suggested as a more logical target.
- Others counter that Apple doesn’t “need” to own a frontier model; Perplexity’s search + QA wrapper is seen as best-in-class and potentially a Google replacement on Apple devices.
- A distinction is drawn between:
  - an LLM-powered Siri (a conversational assistant), vs.
  - Perplexity-style AI search integrated into Spotlight or Safari.
Siri’s Current State and What Users Actually Want
- Broad consensus that Siri is bad at even simple multi-step or slightly fuzzy commands (timers, lights, fans, home automation, calling, alarms).
- Several say Siri’s core issue is not speech recognition, and not just the model, but the architecture: how it is wired into system functions.
- Some note Siri quietly has pockets of “smart” behavior (room-aware lights, resolving renamed rooms), but it is inconsistent and language-dependent.
On-Device vs Cloud, Privacy, and Infrastructure
- Debate over whether moving Siri to Apple servers is a “privacy 180”: some say nothing “leaks” as long as Apple hosts the models itself, while others say it still breaks the long-touted on-device promise.
- Hardware constraints (RAM on iPhones, low-power HomePods) are cited as blockers to strong on-device models.
- Some suggest Apple could run Claude via Bedrock, or host open-source models privately (a rough sketch of the Bedrock path follows this list); others see funneling Siri queries to a third party as off-brand.
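
To make the Bedrock suggestion concrete, here is a minimal Python sketch of what “run Claude inside your own cloud account” looks like at the API level, using boto3 against Amazon Bedrock. The region, model ID, and prompt are placeholders, and nothing here reflects an actual Apple integration.

```python
import json

import boto3

# Illustrative only: route an assistant query to a Claude model hosted in the
# operator's own AWS account via Amazon Bedrock. Region, model ID, and prompt
# are placeholders, not anything Apple has announced.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": "Turn off the living room lights at 10pm."}
    ],
})

response = client.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID
    contentType="application/json",
    accept="application/json",
    body=body,
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])  # the model's reply text
```

The only point of the sketch is that the traffic stays inside the operator’s cloud account rather than going to a public consumer endpoint, which is the property the “host it privately” camp cares about.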
Strategic Disagreement: Is Apple Late or Just Prudent?
- One camp: Apple’s slow, conservative AI strategy has damaged its reputation; it squandered two years in which basic LLM-powered improvements to Siri and iOS could have shipped.
- Opposing view: Apple products remain strong without generative AI; voice assistants are “low-stakes,” and Apple is wise not to burn billions chasing frontier models.
- Some see the smart play as letting OpenAI/Anthropic spend, striking revenue-sharing “default AI” deals (akin to the Google search default deal), and then copying (“Sherlocking”) the features once the tech and user behavior stabilize.
Desire for Better Voice and Agentic Behavior
- Many users—especially heavy voice users and those with older or younger relatives—see voice as central to how people interact with devices.
- Wishlist items include:
  - Robust natural-language home control (multi-room, multi-device, compound commands).
  - Reliable “do what I mean” timers/alarms and messaging (“text my wife I’ll be late” without twenty clarifying follow-ups).
  - System-level agents that understand iOS settings, organize apps, and coordinate across multiple apps via intents/MCP-like tooling (a toy sketch of this intent-dispatch idea follows the list).
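
For readers unfamiliar with what “intents/MCP-like tooling” might mean in practice, below is a rough, framework-agnostic Python sketch: the assistant model emits a structured intent call, and a thin dispatcher maps it onto registered system functions. The intent names, call format, and handlers are invented for illustration; Apple’s actual App Intents framework or any MCP integration would look different.

```python
# Hypothetical sketch: an assistant model returns a structured "intent call",
# and a dispatcher maps it onto system-level functions. Intent names and the
# call format are invented for illustration; this is not Apple's App Intents API.
from dataclasses import dataclass
from typing import Any, Callable, Dict


@dataclass
class Intent:
    name: str
    description: str
    handler: Callable[..., str]


def set_timer(minutes: int, label: str = "Timer") -> str:
    # Stand-in for a real system call that starts a countdown timer.
    return f"Started '{label}' for {minutes} minutes."


def send_message(contact: str, text: str) -> str:
    # Stand-in for handing a draft to the Messages app.
    return f"Sent to {contact}: {text!r}"


REGISTRY: Dict[str, Intent] = {
    i.name: i
    for i in [
        Intent("set_timer", "Start a countdown timer", set_timer),
        Intent("send_message", "Send a message to a contact", send_message),
    ]
}


def dispatch(call: Dict[str, Any]) -> str:
    """Run a structured call of the form {"name": ..., "arguments": {...}}."""
    intent = REGISTRY[call["name"]]
    return intent.handler(**call["arguments"])


# What the model might emit for "text my wife I'll be late":
print(dispatch({
    "name": "send_message",
    "arguments": {"contact": "Wife", "text": "Running about 20 minutes late"},
}))
```

The appeal of this pattern, per the thread, is that the model only has to produce structured calls, while each app keeps control over what its intents actually do on the device.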
Skepticism About “AI Everywhere” on Phones
- Some commenters barely use Siri and don’t want chatbots on phones at all, preferring small, targeted AI features (e.g., photo cleanup) over a grand assistant.
- Others fantasize about radically AI-centric phones (screen-aware assistants, fluid UI instead of discrete apps, always-on environmental understanding), but acknowledge hardware and OS constraints.
Apple Culture and Organizational Constraints
- Several see Apple’s secrecy, tight UX control, and privacy marketing as fundamentally at odds with stochastic, uncontrollable LLM behavior.
- There’s debate over whether Apple’s pattern is “not first, but best,” or whether Siri, Maps v1, and Vision Pro show that this approach can also misfire.
- Some argue Apple’s older, conservative leadership and marketing-driven launches (e.g., Apple Intelligence hype) have led to misalignment between promises and shipped reality.