Tim Cook rallying Apple employees around AI efforts

Apple’s AI Strategy and Timing

  • Many see Apple as late and “fumbling the bag” on LLMs, distracted by AR/Vision Pro and car projects.
  • Others argue no one except Nvidia is clearly making serious AI profits yet, so waiting for commoditization may be rational.
  • Some think Apple is “skating to a different spot,” but critics say the pep talk lacked a clear AI strategy and felt like internal PR.
  • Regulatory risk (DOJ, “default assistant” rules) is raised: if iOS is forced to allow deep third‑party assistant integration, OpenAI/Gemini could entrench themselves.

Mac Hardware, GPUs, and Local AI

  • One camp laments Apple’s abandonment of Nvidia, OpenCL, and rack deployments, calling it a missed AI/HPC opportunity.
  • Defenders highlight Apple Silicon’s unified memory and high bandwidth as excellent for on‑device LLM inference (e.g., ~40B‑parameter models on a laptop; see the back‑of‑envelope sketch after this list).
  • Disagreement over whether Nvidia’s ARM chips “beat” Apple’s; consensus that Apple’s choices limit hyperscaler adoption.
  • Some call Macs a “dead business”; others counter with Mac revenue/profit figures and argue AI itself isn’t yet a bigger business.
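
A rough back‑of‑envelope calculation behind the “40B on a laptop” claim, using assumed numbers rather than benchmarks: 4‑bit quantized weights for a 40B‑parameter model occupy roughly 20 GB, which fits in the unified memory of a higher‑end MacBook, and decode speed is then capped by how fast those weights can be streamed from memory.

      // Back-of-envelope sizing, not a benchmark. Assumed numbers:
      // 4-bit quantized weights, purely memory-bandwidth-bound decoding.
      let parameters = 40e9                                        // ~40B parameters
      let bytesPerParameter = 0.5                                  // 4-bit quantization
      let weightBytes = parameters * bytesPerParameter             // ≈ 20 GB of weights
      let memoryBandwidth = 400e9                                  // ~400 GB/s, Max-class Apple Silicon (assumed)
      let tokensPerSecondCeiling = memoryBandwidth / weightBytes   // ≈ 20 tokens/s upper bound
      print(weightBytes / 1e9, "GB;", tokensPerSecondCeiling, "tok/s ceiling")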

Cook, “Coolness,” and Leadership

  • Cook is often characterized as an operations/finance CEO who optimizes existing lines (iPhone, services, AirPods, M‑series) rather than inventing new, “cool” categories.
  • Counterpoint: those products, plus in‑house silicon and upcoming modem, are cited as huge long‑term strategic wins and “insanely cool” to many.
  • Debate over whether Apple has lost cultural “cool” vs. continued dominance with younger users; “cool” is framed as subjective but strategically relevant.

Developers, App Store, and On‑Device Models

  • Developers want a strong Apple model behind Apple Intelligence and shared inference quotas so they can sell agents cheaply without eating GPU bills.
  • Current on‑device Foundation Models are seen as too small for many use cases; some say that makes the web a better place to build agents (see the sketch after this list).
  • App Store fees and lack of usage‑based billing are seen as long‑term drags on third‑party AI app quality.
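
For context, a minimal sketch of what building on the on‑device Foundation Models looks like, assuming the general shape of Apple’s FoundationModels framework (LanguageModelSession / respond(to:)); the prompt and error handling are placeholders, and exact signatures should be checked against Apple’s documentation:

      import FoundationModels

      // Minimal sketch: one on-device session, one prompt. Assumes the
      // FoundationModels framework's LanguageModelSession / respond(to:) API.
      func summarize(_ text: String) async throws -> String {
          let session = LanguageModelSession(
              instructions: "Summarize the user's text in one sentence."
          )
          let response = try await session.respond(to: text)
          return response.content   // plain-text output from Apple's small on-device model
      }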

AI Assistants, Siri, and Technical Limits

  • Users complain that Siri still fails simple tasks (unit conversions, alarms), that dictation and predictive text are poor, and that accessibility features (e.g., LLM image descriptions) trail Android.
  • Others stress that a reliable, general‑purpose multi‑tool assistant is a frontier problem still unsolved by anyone, including Google/OpenAI.
  • Some note partial workarounds (mapping the Action Button to invoke GPT apps) but argue that friction and Apple’s dependence on Google search revenue keep Siri bad by design.

Existing ML, Safety, and Future Directions

  • Commenters list Apple’s non‑LLM ML successes: crash detection, heart monitoring, the camera pipeline, photo search, local speech models, and OCR (see the on‑device OCR sketch after this list).
  • Debate over whether “AI = LLM” and whether Apple is actually behind AI broadly or just in LLM chatbots.
  • Ethical concerns surface: one person avoids Apple to “slow AI,” others argue risks are more about misuse (job cuts, spam, surveillance) and need regulation, not product boycotts.
  • Some predict always‑on, glasses‑style assistants with on‑device processing as Apple’s likely long game, possibly via Private Cloud Compute plus stronger local models.
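
As an example of the non‑LLM, on‑device ML already shipping, a short sketch of local OCR using the Vision framework’s VNRecognizeTextRequest; the image URL is a placeholder:

      import Foundation
      import Vision

      // On-device text recognition (OCR) with Vision; no network involved.
      func recognizeText(in imageURL: URL) throws -> [String] {
          let request = VNRecognizeTextRequest()
          request.recognitionLevel = .accurate          // slower, higher-quality recognition
          let handler = VNImageRequestHandler(url: imageURL, options: [:])
          try handler.perform([request])
          let observations = request.results ?? []
          return observations.compactMap { $0.topCandidates(1).first?.string }
      }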