Let's talk about AI and end-to-end encryption

Cryptography Techniques for Private AI

  • Discussion of fully homomorphic encryption (FHE) and secure multi-party computation (MPC):
    • In principle, FHE can support neural network operations, but current implementations (especially gate-level binary FHE) are ~10⁶× slower than plaintext.
    • CKKS-style schemes are more practical for ML: ResNet-20 inference can be done in minutes on CPU, with hopes of ~1s on small networks using hardware acceleration.
    • Large models like LLMs remain “unreasonably slow” under FHE for the foreseeable future.
  • MPC and libraries like CrypTen can hide user inputs from the model owner, but outputs are still visible to the provider.
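The MPC point above can be illustrated with a minimal additive secret-sharing sketch. This is a toy in pure Python, not CrypTen's actual API: each value is split into random shares that sum to the secret, parties compute on shares locally, and only the final recombined output is revealed, so neither side sees the other's raw input.

```python
import random

P = 2**61 - 1  # toy prime modulus; real MPC frameworks pick this carefully

def share(x, n=2):
    """Split secret x into n additive shares mod P; any n-1 shares look random."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine shares to reveal the value."""
    return sum(shares) % P

# The user secret-shares an input; the provider secret-shares a model weight.
x_shares = share(42)   # user's private input
w_shares = share(7)    # provider's private parameter

# Addition is done share-by-share with no communication at all.
sum_shares = [a + b for a, b in zip(x_shares, w_shares)]

# Only the output is opened -- neither party ever saw the other's input.
print(reconstruct(sum_shares))  # 49
```

Addition is free in this scheme; multiplication (and hence real neural-network inference) requires interaction between the parties, which is where MPC's cost comes from. Note also that whoever reconstructs the output learns it, matching the point above that the provider still sees results.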

End-to-End Encryption vs. User Agency

  • E2EE protects data in transit but often coincides with poor or nonexistent data export features, limiting user control and portability.
  • Some see this as deliberate lock-in; others argue it’s more about lacking incentives to build good export tooling.
  • Moving accounts (e.g., device-to-device transfers) is not the same as users having raw, scriptable access to their own encrypted data.

Apple’s Private Cloud Compute and Confidential Computing

  • Many view Apple’s PCC / secure enclave approach as a pragmatic, privacy-improving step compared to standard cloud AI.
  • Others stress that PCC is still just a technical guarantee: it can reduce insider and attacker access, but does not inherently provide transparency or limit secondary use of data.
  • Nvidia H100 confidential-computing support and cloud GPU enclaves (Azure, possibly AWS/GCP) are mentioned as building blocks for similar “encrypted-to-enclave” AI services.
  • Some participants argue the article overstates the need for cloud inference, noting Apple Intelligence is restricted to devices powerful enough to run models locally.
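The core idea behind PCC-style confidential computing is remote attestation: before sending data, the client checks a signed measurement (hash) of the code the enclave is running against a published, audited binary. A minimal sketch, using HMAC as a stand-in for the hardware vendor's signature chain; all names here are hypothetical, and real attestation uses asymmetric keys rooted in CPU/GPU hardware:

```python
import hashlib
import hmac

# Stand-in for the hardware root of trust (hypothetical; real systems use
# a vendor certificate chain, not a shared HMAC key).
VENDOR_KEY = b"stand-in-for-hardware-root-of-trust"

def measure(enclave_code: bytes) -> bytes:
    """Measurement = hash of the code the enclave is actually running."""
    return hashlib.sha256(enclave_code).digest()

def attest(enclave_code: bytes):
    """Enclave side: return (measurement, signature over the measurement)."""
    m = measure(enclave_code)
    return m, hmac.new(VENDOR_KEY, m, hashlib.sha256).digest()

def verify(expected_code: bytes, measurement: bytes, sig: bytes) -> bool:
    """Client side: accept only if the signature checks out AND the
    measurement matches the binary the client expects."""
    expected_sig = hmac.new(VENDOR_KEY, measurement, hashlib.sha256).digest()
    return (hmac.compare_digest(expected_sig, sig)
            and measurement == measure(expected_code))

code = b"published inference binary v1.2"
m, sig = attest(code)
print(verify(code, m, sig))               # True  -> client sends its data
print(verify(b"tampered binary", m, sig)) # False -> client refuses
```

This also makes the critics' point concrete: attestation proves *which* code is running, not what that code is allowed to do with the data afterward, so transparency and limits on secondary use remain policy questions.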

Surveillance, Policy, and “Who the AI Works For”

  • Strong concern that AI plus cloud services will enable mass, automated surveillance and “thoughtcrime” detection:
    • Existing trends: content scanning for CSAM, extremist threats, “grooming,” drugs/sex/guns, protest/union organization.
    • Worry that LLMs can easily normalize slang and coded speech, and that stored embeddings can be inverted to recover the original text.
  • Fears that AI-based detection systems will:
    • Have high-stakes false positives with poor human recourse.
    • Be used for censorship, political repression, or automated law enforcement.
    • Become “accountability sinks” that let institutions blame opaque models.
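The embedding-inversion worry can be made concrete with a toy nearest-neighbor sketch. The vocabulary and 3-d vectors below are hypothetical (real embeddings have hundreds of dimensions, and research attacks go further by training decoders that reconstruct full sentences), but the core point is the same: "anonymous" embeddings retained by a provider still leak the underlying words.

```python
import math

# Hypothetical toy word embeddings for illustration only.
VOCAB = {
    "meet":    (0.9, 0.1, 0.0),
    "tonight": (0.1, 0.9, 0.0),
    "protest": (0.0, 0.2, 0.9),
    "dinner":  (0.4, 0.5, 0.1),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

def invert(embedding):
    """Recover the most likely word by nearest-neighbor search
    over a known vocabulary of embeddings."""
    return max(VOCAB, key=lambda w: cosine(VOCAB[w], embedding))

# A provider storing only the vectors can still recover the words:
stored = [(0.88, 0.12, 0.01), (0.02, 0.25, 0.88)]
print([invert(e) for e in stored])  # ['meet', 'protest']
```

Nearest-neighbor lookup is the weakest possible attack and already works here; this is why "we only store embeddings, not text" is a thin privacy guarantee.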

Cloud AI Business Models and Incentives

  • Multiple comments argue incentives, not technical limits, are central:
    • Ad- and data-driven models push providers to scan and retain user data.
    • “Free” or subsidized AI features create lock-in and recurring subscriptions.
    • Without strong regulation and transparency, AI agents are expected to serve providers, advertisers, and governments more than users.