Apple patches decade-old iOS zero-day, possibly exploited by commercial spyware

Device support, forced upgrades, and EOL frustration

  • Multiple comments lament older iPads/iPhones being effectively “bricked” because security fixes are tied to major OS upgrades (e.g., iOS 26) rather than backported to iOS 18 / iPadOS 17.
  • Some see this as a “rug pull” breaking the informal norm of supporting the last two major versions through the next autumn.
  • Others argue that users choosing not to update must accept the tradeoff; many counter by distinguishing declining an OS upgrade from being forced into hardware replacement, wanting security patches without the UX and regression risks of a major-version jump.
  • There is support for laws requiring vendors to open-source hardware/firmware shortly after EOL to allow community security maintenance.

What “zero‑day” means and the nature of this bug

  • Confusion arises over “decade‑old zero‑day”; commenters clarify that “zero‑day” refers to Apple having had zero days to fix the flaw once they learned of it, regardless of how long the bug had existed.
  • It’s emphasized this CVE is likely one stage in a complex exploit chain, not a direct passcode bypass. Several readers note it appears to require prior code execution or memory write capability.
  • Commenters ask whether Lockdown Mode or the newer MTE/MIE memory-tagging mitigations would have blocked the exploit; the thread reaches no clear answer.

Apple security vs alternatives (Android, GrapheneOS, Qubes, Linux phones)

  • Consensus that iOS is still relatively strong compared to mainstream Android; GrapheneOS is viewed as stronger still.
  • Qubes OS is praised for its compartmentalization but seen as impractical on mobile devices.
  • Linux phones (e.g., Librem 5) are criticized as having weak sandboxing, permissions, and lack of verified boot; supporters counter that trusted apps and reinstallability can compensate somewhat.
  • Discussion touches on Apple’s move toward memory-safe code: Swift use, a bounds-safe C dialect, and large-scale deployment of Arm MTE/MIE, though some argue closed implementations limit independent verification.

State spyware ecosystem and ethics of exploits

  • Commenters note that commercial spyware has “democratized” nation-state capabilities; mid-tier actors with sufficient budgets can now buy exploit chains like those sold by NSO Group.
  • Many argue that determined adversaries will always find working chains; decade-old bugs suggest that “you’re not interesting enough” isn’t much comfort.
  • There is a heated ethical debate over working for governments/forensic vendors: some see it as contributing to repression and killings; others frame it as a legitimate, lawful occupation.
  • Proposals include Apple offering very high payouts to outbid offensive buyers, or even formal lawful-access processes to undercut the exploit market; others strongly object that this would amount to a backdoor and destroy Apple’s privacy claims.

Detection, forensics, and network control limits

  • Several posts argue detection and forensics are Apple’s weakest area: once a device is compromised, users and orgs lack tooling to understand what happened.
  • A long subthread debates one user’s repeated “breach” claims on iOS devices; others remain unconvinced that unexplained traffic equals compromise, highlighting the difficulty of reliable attribution.
  • Organizations’ ability to secure mobile devices is seen as fundamentally constrained if OS vendors can bypass VPNs or hide system traffic; full safety is regarded as unattainable.
  • One suggestion: when patching such bugs, leave a non-exploitable “honeypot” and explicitly alert users if someone tries to hit it, especially for high-risk users like journalists.