Most iPhone owners see little to no value in Apple Intelligence so far

Overall Sentiment

  • Many commenters find Apple Intelligence underwhelming or pointless in daily use; several have disabled it.
  • A minority see clear value in specific features (summaries, notification filtering, writing help), but almost nobody reports it as transformative.
  • Some argue that even 20–30% of users finding value at launch could be framed as “success,” but others note the survey data is too vague to interpret cleanly.

Perceived Usefulness of Features

  • Commonly liked:
    • Notification / email summaries for catching up on fast-moving threads or triaging long chains.
    • Reduce Interruptions focus and other ML-style classification (seen as “old” AI, but genuinely helpful).
    • Text proofreading / tone softening, especially for non-native writers or people who write aggressively.
    • Occasional coding assistance (mainly via other tools like Copilot/ChatGPT, not Apple’s own).
  • Commonly dismissed:
    • Image Playground, Genmoji, and photo cleanup are seen as toys or “corny,” often producing poor-quality results or obvious artifacts.
    • Message / mail categorization and summaries are frequently inaccurate or even invert the intended meaning, destroying trust.
    • Visual Intelligence mostly just hands off to ChatGPT or Google Lens, which users already had via separate apps.

Siri and the “Real Assistant” Gap

  • Strong desire for an actually capable, context-aware assistant that can:
    • Orchestrate across apps (calendar, messages, email, notes, smart home).
    • Automate multi-step, real-world tasks (e.g., handling dentist visits, travel, recurring chores).
  • Many report that Siri remains unreliable, slow, or obtuse even under the Apple Intelligence branding; some say it’s only good for setting timers.

UX, Reliability, and Performance Problems

  • Complaints about:
    • Battery drain and device freezes; some users report stability improving when Apple Intelligence is disabled.
    • Noticeable notification delays due to summarization, including in CarPlay.
    • Terrible onboarding: Image Playground fails with no clear progress indication, and the naming clash between Image Playground and Swift Playgrounds causes confusion.
    • Awkward UI: error toasts rendered unreadably under the Dynamic Island, confusing controls like “Appearance.”
  • Summaries in Messages and notifications often appear without clear indication that they are summaries, causing confusion.

Accessibility and Voice Interfaces

  • Several see massive potential for blind or low-vision users if voice control and on-device understanding become robust.
  • The current reality is described as “rage-inducing”: touch-only UIs have removed autonomy, assistants are flaky, and AI promises aren’t delivered.
  • Debate over whether LLMs are actually necessary here, versus more traditional, deterministic voice interfaces.

AI Hype, Comparisons, and Business Pressure

  • Many compare Apple unfavorably to GPT‑4o, Gemini, and other cloud models; Apple’s small on-device models are seen as especially weak at summarization.
  • Some defend Apple’s slower, more private approach and note prior ML wins (Photos search, autocorrect), arguing “AI” has been there for years without the label.
  • Broader discussion about AI hype: companies pushing Copilot and similar tools for stock-market optics and “not missing the boat,” regardless of actual productivity gains.

Desired Future Direction

  • Commenters want:
    • Deep, invisible integration where AI quietly improves notifications, search, Shortcuts, and app-to-app workflows.
    • Less emphasis on gimmicky generative features and more on reliable, agentic OS-level assistance.
    • Clearer affordances: what AI can do, where it’s active, and strong controls to disable specific behaviors (e.g., message suggestions) without killing everything.