There isn't much point to HTTP/2 past the load balancer
gRPC and HTTP/2 inside the infrastructure
- Several commenters note a major in-datacenter use case the article barely touches: gRPC.
- Teams have invested heavily in HTTP/2 internally to get gRPC’s multiplexed, binary, streaming RPCs, with clear performance wins over JSON/HTTP APIs.
- Others clarify that this is mostly a non-browser story; browsers don’t expose “native” gRPC over HTTP/2, so you still need specialized clients or fall back to WebSockets/other transports.
- Load balancing gRPC can be tricky: an L4 balancer picks a backend once per connection, so with long-lived HTTP/2 connections traffic can skew heavily to a subset of backends; HTTP/2-aware L7 proxies balance per request and avoid this.
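The connection-pinning problem can be sketched with a toy simulation (backend and client counts are illustrative, not from the thread): an L4 balancer assigns a backend once per connection, while an HTTP/2-aware L7 proxy picks one per request.

```python
import random
from collections import Counter

BACKENDS = 10
CLIENTS = 4                # each client holds one long-lived HTTP/2 connection
REQUESTS_PER_CLIENT = 1000

random.seed(42)

# L4 balancing: the backend is chosen once, at connection time, so all of a
# client's requests land on the same backend for the life of the connection.
l4 = Counter()
for _ in range(CLIENTS):
    backend = random.randrange(BACKENDS)
    l4[backend] += REQUESTS_PER_CLIENT

# L7 balancing: an HTTP/2-aware proxy picks a backend per request
# (round-robin here), so load spreads evenly regardless of connection count.
l7 = Counter()
for i in range(CLIENTS * REQUESTS_PER_CLIENT):
    l7[i % BACKENDS] += 1

print("L4 backends used:", len(l4))   # at most CLIENTS of them
print("L7 backends used:", len(l7))   # all of them, evenly
```

With few long-lived connections, the L4 case can leave most backends completely idle no matter how many requests flow.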
Do you even want a load balancer?
- One camp argues: if your framework and language are good, you shouldn’t need a reverse proxy; it adds another protocol, failure mode, and attack surface.
- The dominant response: production app servers are not hardened for direct Internet exposure (slowloris, malformed headers, DoS), and most docs assume a fronting proxy.
- Common reasons given for load balancers/reverse proxies: TLS termination, central security enforcement, static asset performance, URL rewrites, multi-service routing, graceful deploys, failover, hiding private resources, and solving DNS/TTL and multi-IP issues.
- Strong disagreement over where TLS should end: some insist on end-to-end encryption (post-Snowden), others terminate early and rely on internal network controls or VPNs.
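The "not hardened for direct exposure" point is concrete: a slowloris client opens a connection and trickles header bytes forever, tying up a worker. Below is a minimal sketch of the defense a fronting proxy typically provides — dropping clients that don't complete their headers within a deadline. The timeout, port, and header parsing are illustrative, not a real HTTP implementation.

```python
import socket
import threading

HEADER_DEADLINE = 0.2  # seconds a client gets to finish its request headers

def serve_once(listener):
    # Accept one connection and answer it, but give up on clients whose
    # headers never complete within the deadline (slowloris defense).
    conn, _ = listener.accept()
    conn.settimeout(HEADER_DEADLINE)
    data = b""
    try:
        while b"\r\n\r\n" not in data:   # wait for end of request headers
            chunk = conn.recv(1024)
            if not chunk:
                return
            data += chunk
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    except socket.timeout:
        pass  # slow client: drop it instead of holding a worker forever
    finally:
        conn.close()

listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]

# A well-behaved client completes its request and gets a response.
threading.Thread(target=serve_once, args=(listener,), daemon=True).start()
fast = socket.create_connection(("127.0.0.1", port))
fast.sendall(b"GET / HTTP/1.1\r\nHost: x\r\n\r\n")
fast_reply = fast.recv(1024)
fast.close()

# A slowloris-style client sends a partial header line and stalls.
threading.Thread(target=serve_once, args=(listener,), daemon=True).start()
slow = socket.create_connection(("127.0.0.1", port))
slow.sendall(b"GET / HTTP/1.1\r\nX-Slow: ")
slow_reply = slow.recv(1024)  # server times out and closes: b""
slow.close()
listener.close()
```

Production proxies layer many more protections on top (header size limits, connection caps, request validation), which is exactly why app servers assume one in front.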
Is HTTP/2 past the load balancer worth it?
- Article’s claim: inside the DC, low latency and long-lived connections mean HTTP/2’s multiplexing gives “little benefit,” and encryption/TLS handling adds complexity, especially in Ruby where parallelism is weak.
- Pushback:
- Header compression and fewer connections can matter at scale; one comment cites measurements where headers were a huge share of bandwidth.
- Multiplexing can mitigate ephemeral port exhaustion and reduce syscall overhead by coalescing many small responses.
- Some see large speedups even on localhost and question the article's "no benefit" claim, noting it offers no benchmarks in support.
- Others side with the article: implementing HTTP/2 end-to-end (HPACK, flow control, stream state) is significantly more complex than HTTP/1.1, and for most typical LAN workloads the gain is marginal.
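Both pushback points reduce to back-of-envelope arithmetic. The byte counts below are illustrative assumptions, not measurements from the thread, and the port range is the Linux default.

```python
# Header share of bandwidth for many small responses: repeated HTTP/1.1
# headers can dominate, while HPACK shrinks repeats dramatically.
UNCOMPRESSED_HEADERS = 500   # assumed request+response header bytes, HTTP/1.1
HPACK_HEADERS = 40           # assumed repeat-request bytes after HPACK warms up
BODY = 200                   # a small JSON payload

h1_share = UNCOMPRESSED_HEADERS / (UNCOMPRESSED_HEADERS + BODY)
h2_share = HPACK_HEADERS / (HPACK_HEADERS + BODY)
print(f"HTTP/1.1 header share: {h1_share:.0%}")  # 71%
print(f"HTTP/2   header share: {h2_share:.0%}")  # 17%

# Ephemeral-port exhaustion: one connection per request means TIME_WAIT
# caps the new-connection rate between one client and one backend; a
# single multiplexed connection sidesteps the cap entirely.
EPHEMERAL_PORTS = 60999 - 32768 + 1  # Linux default ip_local_port_range
TIME_WAIT = 60                       # seconds a closed port stays unusable
max_new_conns_per_sec = EPHEMERAL_PORTS / TIME_WAIT
print(f"~{max_new_conns_per_sec:.0f} new connections/s to one backend")
```

Whether these numbers matter depends entirely on workload shape, which is why both camps can be right for their own systems.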
Streaming, HTTP/2 vs HTTP/3, and browser gaps
- HTTP/2’s bidirectional streams are praised for long-lived, duplex communication (especially service-to-service), but browsers don’t expose this cleanly to JS; WebSockets and now WebTransport are the de facto options.
- Some note HTTP/2 can perform poorly on lossy mobile networks due to TCP-level head-of-line blocking; HTTP/3/QUIC improves this but currently costs more CPU and relies heavily on userland stacks.
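The TCP head-of-line-blocking point can be shown with a toy in-order delivery model: segments for three multiplexed streams share one TCP byte stream, and a single lost segment (seq 1 below) stalls delivery for all of them until it is retransmitted. Purely illustrative, not a TCP implementation.

```python
# (seq, stream_id) segments as they arrive: seq 1 is lost and only shows
# up last, as a retransmit. Streams "a" and "c" lose nothing themselves.
arrivals = [(0, "a"), (2, "b"), (3, "c"), (4, "a"), (1, "b")]

def tcp_deliver(arrivals):
    # TCP hands bytes to the application strictly in sequence order, so a
    # hole at seq 1 buffers everything behind it, across all streams.
    delivered, buffered, next_seq = [], {}, 0
    for seq, stream in arrivals:
        buffered[seq] = stream
        while next_seq in buffered:
            delivered.append((next_seq, buffered.pop(next_seq)))
            next_seq += 1
    return delivered

before_retransmit = tcp_deliver(arrivals[:-1])
after_retransmit = tcp_deliver(arrivals)
print(len(before_retransmit))  # 1: every stream is stuck behind the hole
print(len(after_retransmit))   # 5: the retransmit releases them all at once
```

QUIC avoids this by tracking loss per stream, so streams "a" and "c" would have delivered immediately.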
Security and correctness
- End-to-end HTTP/2 substantially reduces classic HTTP request-smuggling issues, which exploit ambiguous HTTP/1.1 message framing (e.g. conflicting Content-Length and Transfer-Encoding headers); downgrading to HTTP/1.1 at the proxy reintroduces that ambiguity.
- A few operators disable HTTP/2 on load balancers until they’re confident implementations are free of such vulnerabilities.
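The framing ambiguity behind classic request smuggling can be sketched directly: when a front end honors Content-Length and a back end honors Transfer-Encoding, they disagree about where the first request ends. The parsing below is deliberately naive and for illustration only, not a real HTTP parser.

```python
# A classic CL.TE payload: both framing headers present, and they disagree.
raw = (
    b"POST / HTTP/1.1\r\n"
    b"Host: example\r\n"
    b"Content-Length: 6\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"0\r\n"
    b"\r\n"
    b"GET /admin HTTP/1.1\r\n\r\n"
)
headers, _, rest = raw.partition(b"\r\n\r\n")

# Front end trusts Content-Length: the body is 6 bytes ("0\r\n\r\nG"), and
# whatever follows is treated as the start of a second request.
cl_body, cl_leftover = rest[:6], rest[6:]

# Back end trusts Transfer-Encoding: the zero-length chunk "0\r\n\r\n" ends
# the body, so the smuggled request begins at a different offset.
te_end = rest.index(b"0\r\n\r\n") + 5
te_body, te_leftover = rest[:te_end], rest[te_end:]

print(cl_leftover != te_leftover)  # True: the two sides desynchronize
```

HTTP/2's binary framing carries an explicit length per frame, so there is no equivalent ambiguity to exploit — until a proxy rewrites the request back into HTTP/1.1.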