QUIC for the kernel

QUIC’s goals vs current performance

  • Benchmarks in the article show in‑kernel QUIC running much slower than in‑kernel TCP/TLS; some commenters report userspace QUIC also underperforming badly on fast links, with throughput degrading further under congestion.
  • Explanations raised: missing offloads (TSO/GSO), extra copies, encrypted headers, immature batching, and the absence of NIC‑level optimizations (the GSO mechanism is sketched after this list).
  • Several argue this is expected: TCP has had ~30 years of hardware and kernel tuning, while QUIC is optimized for handshake latency, mobility, and multiplexing rather than raw throughput on pristine LANs.
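To make the batching point concrete, below is a minimal sketch of UDP generic segmentation offload (GSO), the Linux mechanism (UDP_SEGMENT, kernel 4.18+) that fast userspace QUIC stacks lean on; the buffer layout and the helper name are illustrative assumptions, not code from the article.

    /* Minimal sketch: batching outbound QUIC packets with UDP GSO.
     * One sendmsg() hands the kernel a buffer holding many packets;
     * the kernel (or a capable NIC) splits it into pkt_len-byte
     * datagrams, amortizing syscall and per-packet overhead. */
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    #ifndef UDP_SEGMENT
    #define UDP_SEGMENT 103   /* from <linux/udp.h>; missing in older glibc */
    #endif

    /* buf holds `total` bytes of back-to-back packets, each pkt_len
     * bytes long except possibly the last. */
    static ssize_t send_gso_batch(int fd, const struct sockaddr_in *dst,
                                  void *buf, size_t total, uint16_t pkt_len)
    {
        struct iovec iov = { .iov_base = buf, .iov_len = total };
        char ctrl[CMSG_SPACE(sizeof(uint16_t))] = { 0 };
        struct msghdr msg = {
            .msg_name       = (void *)dst,
            .msg_namelen    = sizeof(*dst),
            .msg_iov        = &iov,
            .msg_iovlen     = 1,
            .msg_control    = ctrl,
            .msg_controllen = sizeof(ctrl),
        };
        struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);

        cm->cmsg_level = IPPROTO_UDP;            /* == SOL_UDP */
        cm->cmsg_type  = UDP_SEGMENT;
        cm->cmsg_len   = CMSG_LEN(sizeof(uint16_t));
        memcpy(CMSG_DATA(cm), &pkt_len, sizeof(pkt_len));

        return sendmsg(fd, &msg, 0);
    }

One syscall can thus move dozens of full‑size datagrams; the receive side has a mirror‑image mechanism (UDP GRO), and both are software stand‑ins for the hardware offloads TCP has enjoyed for years.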

Machine‑to‑machine vs mobile use

  • QUIC seen as less compelling for long‑lived, intra‑DC or server‑to‑server flows where TCP already performs well and is deeply optimized.
  • Others note QUIC can shine for certain M2M use cases (e.g., QUIC‑based SSH with faster shell startup, better port‑forwarding via unreliable datagrams).
  • Consensus: QUIC’s “killer app” is lossy, roaming, mobile networks (IP changes, high RTT, packet loss) rather than clean DC links.

NAT, IPv4/IPv6, and P2P

  • QUIC over UDP runs into residential NAT and firewall behaviors; many devices don’t handle P2P UDP or QUIC “smartly”.
  • Debate over NAT: some call it “the devil” for P2P and privacy; others say it remains useful for multihoming, policy routing, and managing enterprise‑edge complexity, and stays relevant even with IPv6.
  • IPv6 doesn’t automatically fix P2P: common home routers lack good IPv6 pinholing, and the nuances of STUN and UDP hole‑punching were discussed (a minimal sketch follows this list).
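For concreteness, here is a minimal sketch of classic UDP hole‑punching, assuming both peers have already learned each other's public address:port from a rendezvous (STUN‑style) server; the payload and retry policy are illustrative.

    /* Minimal UDP hole-punching sketch. Both peers run this at
     * roughly the same time, on the SAME socket they used to reach
     * the rendezvous server, so the existing NAT mapping is reused.
     * Each outbound datagram opens (or refreshes) a mapping in the
     * local NAT; once both sides have sent, inbound packets pass and
     * the path is usable, e.g. for a QUIC handshake. */
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    static int punch_hole(int fd, const struct sockaddr_in *peer)
    {
        for (int attempt = 0; attempt < 10; attempt++) {
            /* Early packets may be dropped by the peer's NAT until
             * the peer has sent one of its own. */
            sendto(fd, "punch", 5, 0,
                   (const struct sockaddr *)peer, sizeof(*peer));

            char buf[64];
            ssize_t n = recvfrom(fd, buf, sizeof(buf), MSG_DONTWAIT,
                                 NULL, NULL);
            if (n > 0)
                return 0;   /* path is open in both directions */
            sleep(1);
        }
        return -1;
    }

This only works when both NATs use endpoint‑independent mapping; a symmetric NAT (or CGN) allocates a different public port per destination, so the advertised address is wrong and the punch fails, which is one of the nuances the thread raised.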

Kernel vs userspace stacks and ossification

  • One camp: QUIC belongs in userspace to preserve agility and avoid ossifying a protocol whose big selling point is evolvability.
  • Counterpoint: ossification mostly comes from middleboxes; kernel code can be updated far more easily than proprietary network gear, and in‑kernel QUIC is needed for performance and eventual hardware offload (kTLS offers a precedent; see the sketch after this list).
  • Some suggest a split: kernel‑side QUIC for servers, userspace stacks for clients.
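For context on how a kernel fast path can coexist with userspace agility, here is a minimal sketch of the existing kTLS pattern: the TLS handshake stays in userspace and only the negotiated keys are installed into the kernel. An in‑kernel QUIC could plausibly adopt the same split. The key material below is placeholder data, and the SOL_TLS fallback matches the kernel's documented value.

    /* Minimal kTLS sketch (Linux 4.13+): userspace performs the TLS
     * handshake (OpenSSL, GnuTLS, ...), then hands the traffic keys
     * to the kernel, which encrypts on the normal send() path. */
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <linux/tls.h>

    #ifndef TCP_ULP
    #define TCP_ULP 31
    #endif
    #ifndef SOL_TLS
    #define SOL_TLS 282
    #endif

    static int enable_ktls_tx(int fd,
                              const unsigned char key[16], /* from handshake */
                              const unsigned char iv[8],
                              const unsigned char salt[4],
                              const unsigned char seq[8])
    {
        struct tls12_crypto_info_aes_gcm_128 ci;

        memset(&ci, 0, sizeof(ci));
        ci.info.version     = TLS_1_2_VERSION;
        ci.info.cipher_type = TLS_CIPHER_AES_GCM_128;
        memcpy(ci.key,     key,  TLS_CIPHER_AES_GCM_128_KEY_SIZE);
        memcpy(ci.iv,      iv,   TLS_CIPHER_AES_GCM_128_IV_SIZE);
        memcpy(ci.salt,    salt, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
        memcpy(ci.rec_seq, seq,  TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE);

        /* Attach the TLS upper-layer protocol, then install TX keys. */
        if (setsockopt(fd, IPPROTO_TCP, TCP_ULP, "tls", sizeof("tls")) < 0)
            return -1;
        return setsockopt(fd, SOL_TLS, TLS_TX, &ci, sizeof(ci));
    }

The same shape (handshake in a library, symmetric crypto in the kernel) is what makes NIC offload reachable later, while preserving much of the protocol agility that the userspace camp worries about losing.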

Security and encryption

  • Some question why intra‑datacenter links need encryption at all; replies cite documented interception of private links, lateral movement inside compromised networks, and the integrity protection that encryption adds.
  • Defense‑in‑depth arguments: even same‑rack traffic may traverse untrusted or vulnerable gear; service meshes often mandate encryption on‑host.

APIs, features, and use cases

  • Discussion of how a kernel QUIC socket API should expose multi‑stream semantics; comparisons to SCTP’s APIs (widely seen as clunky) and ideas like “peeling off” streams into separate FDs (sketched after this list).
  • Interest in the unreliable‑datagram extension (RFC 9221) for games, voice, VPNs, and QUIC‑based SSH/Mosh‑style tools.
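The SCTP precedent behind the “peel off” idea is a real API; the QUIC analogue below is purely hypothetical, shown only to illustrate the one‑fd‑per‑stream shape (build with lksctp‑tools and link with -lsctp).

    /* sctp_peeloff() detaches one association from a one-to-many
     * SCTP socket into its own fd (assoc_id typically comes from the
     * SCTP_SNDRCV ancillary data of a received message). */
    #include <stdint.h>
    #include <unistd.h>
    #include <netinet/sctp.h>

    static int peel_and_serve(int one_to_many_fd, sctp_assoc_t assoc_id)
    {
        int fd = sctp_peeloff(one_to_many_fd, assoc_id);  /* real API */
        if (fd < 0)
            return -1;
        /* ... ordinary read()/write()/epoll on fd ... */
        close(fd);
        return 0;
    }

    /* Hypothetical QUIC analogue (NOT an existing interface): every
     * accepted stream becomes its own fd, so per-stream readiness and
     * flow control fall out of normal file-descriptor machinery. */
    int quic_stream_peeloff(int conn_fd, uint64_t stream_id);

The attraction is that plain read()/write() and epoll then work per stream, avoiding the message‑and‑ancillary‑data style that made the SCTP API feel clunky.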

Kernel size and microkernel concerns

  • Some object to adding more complex protocol logic to Linux, citing the millions of lines of privileged code already present and the growing attack surface, and advocate microkernel designs where drivers and network stacks run in userspace.
  • Others respond that Linux is intentionally monolithic for performance and hardware integration; microkernel options exist but aren’t yet competitive for mainstream desktop and server workloads.

HTTP/3, SNI routing, and deployment pain points

  • Because QUIC encrypts even its Initial packets, HTTP/3 breaks the existing TLS “peek and proxy” patterns (e.g., NGINX’s ssl_preread_server_name) used for failover and SNI‑based routing (the pattern that breaks is shown after this list).
  • Suggested workarounds: rely on clients’ HTTP/3 fallback to HTTP/1.1 or HTTP/2 over TLS, use HTTPS DNS records and Alt‑Svc headers for discovery, or implement specialized QUIC‑aware routing that decrypts the Initial packet (complicated further by ECH).
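For concreteness, this is the kind of TCP‑side configuration that has no QUIC equivalent, because there is no cleartext ClientHello to peek at on UDP; hostnames and addresses are placeholders.

    # TLS "peek and proxy" (TCP only): ssl_preread reads the cleartext
    # ClientHello without terminating TLS, so the SNI can pick a
    # backend. QUIC's Initial packets are encrypted, so there is no
    # equivalent field to preread.
    stream {
        map $ssl_preread_server_name $backend {
            app.example.com   app_pool;
            api.example.com   api_pool;
            default           fallback_pool;
        }

        upstream app_pool      { server 10.0.0.10:443; }
        upstream api_pool      { server 10.0.0.20:443; }
        upstream fallback_pool { server 10.0.0.30:443; }

        server {
            listen 443;
            ssl_preread on;
            proxy_pass $backend;
        }
    }

The discovery‑based workaround pairs an HTTPS DNS record advertising h3 support:

    example.com.  3600  IN  HTTPS  1 . alpn="h3,h2"

with the corresponding response header from the TCP origin, so clients learn about HTTP/3 out of band (values illustrative):

    Alt-Svc: h3=":443"; ma=86400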

Adoption and outlook

  • Some perceive QUIC as obscure; others note it’s already widely used (e.g., HTTP/3 in browsers) and compare its trajectory to IPv6: slow but steadily increasing share of traffic.
  • Overall sentiment: QUIC clearly improves UX in hostile/mobile networks and simplifies higher‑level protocols, but its performance, kernel integration, and operational story are still evolving, especially for datacenter and M2M scenarios.