Valorant's 128-Tick Servers (2020)
Valorant vs. Counter‑Strike Netcode
- Many commenters contrast Riot’s 128-tick servers (since 2020) with Valve’s newer CS2 “subtick” system.
- Subtick is described as theoretically more accurate (see the conceptual sketch after this list) but buggy in practice, with years of patches and lingering instability compared to the mature 128-tick CS:GO experience.
- Some players report not noticing a big difference between 64-tick and subtick; others say CS2 feels “janky,” with brittle netcode, excess packet size (especially animation data), and disabled snapshot buffering.
- There is frustration that Valve moved away from 128-tick (and now prevents 128-tick community servers in CS2), while a free competitor offers 128-tick by default.
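For context on the terms above: a fixed-tick server treats every input as if it occurred exactly on a tick boundary, while a sub-tick design keeps each input’s own timestamp and resolves it at that instant within the tick. The sketch below is purely conceptual; the names and structure are assumptions for illustration, not Valve’s or Riot’s actual code.

```cpp
#include <cmath>
#include <cstdio>

// Conceptual contrast between fixed-tick and sub-tick input timing.
// Illustrative only; not engine code.
struct InputEvent {
    double clientTime;  // seconds, stamped on the client when the key was pressed
};

constexpr double kTickRate = 128.0;
constexpr double kTickDt   = 1.0 / kTickRate;  // ~7.8 ms

// Fixed-tick: the event is snapped to the tick it falls in, so up to one
// full tick interval of timing information is discarded.
double fixedTickTime(const InputEvent& e) {
    return std::floor(e.clientTime / kTickDt) * kTickDt;
}

// Sub-tick: the simulation still advances once per tick, but the event keeps
// its original timestamp and is resolved at that point within the tick.
double subTickTime(const InputEvent& e) {
    return e.clientTime;
}

int main() {
    InputEvent shot{1.2371};  // fired 1.2371 s into the round
    std::printf("fixed-tick applies the shot at %.4f s\n", fixedTickTime(shot));
    std::printf("sub-tick  applies the shot at %.4f s\n", subTickTime(shot));
    return 0;
}
```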
Tick Rates Across Games & Their Impact
- Comparisons are given (a minimal tick-loop sketch follows this list):
  - Valorant: 128-tick server, with ~70–75 Hz client updates observed.
  - CS: historically 64-tick (official) / 128-tick (Faceit, community); CS2 uses 64 with subtick interpolation.
  - Fortnite ~30, Apex ~20, Overwatch ~60.
  - Battlefield 4 evolved from 10 Hz server updates to 120–144 Hz, with a huge perceived improvement.
- Other genres tolerate very low tick rates (e.g., RuneScape ~1.67 tps, EVE Online ~1 tps) and even exploit them for “tick manipulation” and rhythm-like gameplay, illustrating that the appropriate tick rate is genre- and mechanic-dependent.
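To put these rates in context, a server’s tick rate is just how often its fixed-timestep simulation loop runs: ~7.8 ms of game time per step at 128 Hz, 50 ms per step at 20 Hz. Below is a minimal sketch of such a loop; the structure and stubbed steps are illustrative assumptions, not any specific engine’s code.

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    using Clock = std::chrono::steady_clock;

    constexpr double kTickRate = 128.0;  // ticks per second
    const auto kTickDt = std::chrono::duration_cast<Clock::duration>(
        std::chrono::duration<double>(1.0 / kTickRate));  // ~7.8 ms per tick

    auto next = Clock::now();
    for (int tick = 0; tick < 5; ++tick) {   // a few ticks for the demo
        // 1. Drain queued player inputs          (omitted)
        // 2. Step the simulation by one tick     (omitted)
        // 3. Snapshot state and send to clients  (omitted)
        std::printf("tick %d, dt = %.3f ms\n", tick, 1000.0 / kTickRate);

        next += kTickDt;
        std::this_thread::sleep_until(next);  // hold the loop to the tick boundary
    }
    return 0;
}
```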
Server Architecture & Optimization
- Discussion around Valorant’s per-match process model vs. potential multi-hosting / shared-state approaches.
- Some argue that a single process hosting many matches could yield modest cache benefits, at the cost of a crash affecting many matches at once and extra engineering/testing effort; others think the gains would be minor.
- The migration to Intel Xeon Scalable CPUs is noted as a big win; one commenter finds the article reads partly like an Intel marketing piece.
- Broad-phase vs narrow-phase collision (coarse bounding boxes then detailed checks) is highlighted as a standard pattern underlying physics engines and ray tracing.
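A rough sketch of that two-phase pattern, assuming axis-aligned bounding boxes for the broad phase and stubbing out the narrow phase (exact hulls, capsules, ray casts); the types and names are hypothetical, not from Riot’s engine.

```cpp
#include <cstdio>
#include <utility>
#include <vector>

// Axis-aligned bounding box used for the cheap broad-phase test.
struct AABB {
    float minX, minY, minZ;
    float maxX, maxY, maxZ;
};

// Cheap overlap test: rejects most pairs before any expensive geometry work.
bool aabbOverlap(const AABB& a, const AABB& b) {
    return a.minX <= b.maxX && a.maxX >= b.minX &&
           a.minY <= b.maxY && a.maxY >= b.minY &&
           a.minZ <= b.maxZ && a.maxZ >= b.minZ;
}

// Stand-in for the expensive narrow-phase check (exact hulls, rays, ...).
bool narrowPhaseCollides(int /*idA*/, int /*idB*/) {
    return true;  // placeholder: a real engine would test detailed geometry here
}

int main() {
    std::vector<AABB> boxes = {
        {0, 0, 0, 1, 1, 1},
        {0.5f, 0.5f, 0.5f, 1.5f, 1.5f, 1.5f},  // overlaps the first box
        {10, 10, 10, 11, 11, 11},              // far away, culled in broad phase
    };

    // Broad phase: collect candidate pairs with the cheap AABB test.
    std::vector<std::pair<int, int>> candidates;
    for (int i = 0; i < static_cast<int>(boxes.size()); ++i)
        for (int j = i + 1; j < static_cast<int>(boxes.size()); ++j)
            if (aabbOverlap(boxes[i], boxes[j]))
                candidates.emplace_back(i, j);

    // Narrow phase: run the expensive test only on surviving candidates.
    for (auto [i, j] : candidates)
        if (narrowPhaseCollides(i, j))
            std::printf("objects %d and %d collide\n", i, j);
    return 0;
}
```

The value of the split is that the cheap overlap test rejects most pairs, so the expensive geometry work runs only on the few survivors; production engines additionally accelerate the broad phase itself with spatial structures (grids, BVHs) rather than the naive pair loop shown here.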
Latency, Routing, and Matchmaking
- Several point out that tick rate can’t fix high latency (a back-of-the-envelope breakdown follows this list); modern matchmaking often optimizes for skill rather than ping, creating mixed-latency lobbies.
- Riot’s investment in its own backbone and dark fiber to keep latency under ~35 ms is praised as a major differentiator.
- Others nostalgically recall regional servers with sub‑10 ms ping as feeling far more “crisp.”
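To make the “tick rate can’t fix latency” point concrete: the tick interval only bounds server-side quantization delay, while the network round trip and the client’s interpolation buffer sit on top of it. The breakdown below uses illustrative numbers; the two-snapshot interpolation buffer and the RTT values (other than the ~35 ms figure mentioned above) are assumptions, not measurements.

```cpp
#include <cstdio>

// Rough input-to-feedback delay model: tick quantization is only one term,
// and it is already small at 128 Hz; RTT usually dominates.
int main() {
    const double tickRate = 128.0;
    const double tickMs   = 1000.0 / tickRate;   // ~7.8 ms worst-case quantization
    const double interpMs = 2.0 * tickMs;        // assumed 2-snapshot interpolation buffer
    const double rttsMs[] = {10.0, 35.0, 80.0};  // LAN-like, ~35 ms backbone target, mixed-latency lobby

    for (double rtt : rttsMs) {
        // One-way trip for the input, processing on the next tick,
        // one-way trip back, then the client's interpolation delay.
        double total = rtt / 2.0 + tickMs + rtt / 2.0 + interpMs;
        std::printf("RTT %5.1f ms -> ~%5.1f ms input-to-screen (tick share: %4.1f ms)\n",
                    rtt, total, tickMs);
    }
    return 0;
}
```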
Languages and Tech Stacks for Game Servers
- Debate over whether fast-twitch FPS servers can reasonably be written in Erlang/Elixir or other GC’d languages.
- Consensus: BEAM is common for matchmaking/metadata services and fine for slower MMOs, but high‑frequency simulations still overwhelmingly favor C/C++ with arena-style allocation to avoid GC pauses.
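A minimal sketch of the arena-style (bump) allocation mentioned above: per-tick scratch memory is carved out of one pre-allocated block and released all at once when the tick ends, so there is no per-object free and no garbage collector to pause the simulation. This illustrates the general technique, not any particular engine’s allocator.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Bump-pointer arena: allocation is a pointer increment, "free" is resetting
// the offset once per tick. No per-object bookkeeping, no GC pauses.
class Arena {
public:
    explicit Arena(std::size_t bytes) : buffer_(bytes), offset_(0) {}

    void* allocate(std::size_t size, std::size_t align = alignof(std::max_align_t)) {
        std::size_t aligned = (offset_ + align - 1) & ~(align - 1);
        if (aligned + size > buffer_.size()) return nullptr;  // out of scratch space
        offset_ = aligned + size;
        return buffer_.data() + aligned;
    }

    void reset() { offset_ = 0; }  // called once at the end of each tick

private:
    std::vector<std::uint8_t> buffer_;
    std::size_t offset_;
};

struct HitResult { int targetId; float distance; };

int main() {
    Arena perTick(1 << 20);  // 1 MiB of per-tick scratch memory

    for (int tick = 0; tick < 3; ++tick) {
        // Transient per-tick data (hit results, candidate lists, ...) comes from the arena.
        void* scratch = perTick.allocate(64 * sizeof(HitResult));
        std::printf("tick %d: got %s scratch block\n", tick, scratch ? "a" : "no");

        perTick.reset();  // everything allocated this tick is discarded in O(1)
    }
    return 0;
}
```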
Buy Phase, Downtime, and Player Experience
- The article’s optimization (no server-side animation during the buy phase; sketched after this list) triggers a broader argument about perceived “wasted time” in lobbies, buy phases, and end-of-match cinematics.
- Some see this as “selling less game” by stretching non-interactive time; others insist the buy phase is core strategy and “real play,” and note that tactical FPS formats historically involve significant downtime (including spectating after early death).
- There’s disagreement over whether the Valorant/CS round structure, with its buy phases, runs too long; critics cite personal time constraints and compare these games to titles that drop players into the action more quickly.
- Others argue these games target players with plenty of time and tight MMR requirements inherently lengthen queues and setup phases.
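A hedged sketch of what the buy-phase optimization implies at the level of the server’s per-tick update: the animation system is simply not stepped while the round is in its buy phase. The phase enum, system names, and update order are hypothetical, not Riot’s code.

```cpp
#include <cstdio>

// Hypothetical round phases; names are illustrative only.
enum class RoundPhase { Buy, Live, PostRound };

void updateMovement(double dt)  { std::printf("  movement  (%.4f s)\n", dt); }
void updateWeapons(double dt)   { std::printf("  weapons   (%.4f s)\n", dt); }
void updateAnimation(double dt) { std::printf("  animation (%.4f s)\n", dt); }

// Per-tick server update: the optimization described in the article is to
// skip server-side animation work entirely during the buy phase.
void serverTick(RoundPhase phase, double dt) {
    updateMovement(dt);
    updateWeapons(dt);
    if (phase != RoundPhase::Buy) {
        updateAnimation(dt);
    }
}

int main() {
    const double dt = 1.0 / 128.0;
    std::printf("buy phase tick:\n");
    serverTick(RoundPhase::Buy, dt);
    std::printf("live phase tick:\n");
    serverTick(RoundPhase::Live, dt);
    return 0;
}
```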
Economics of High Tick Rates
- It’s emphasized that 128-tick “works” technically, but roughly doubles simulation compute per player compared to 64-tick; at millions of concurrent users that cost is non-trivial.
- Some suggest hardware cost per tick declines over time, softening the penalty; others counter that operators will still prefer “one penny over two,” so economics, not just tech, drives conservative tick choices.
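The cost argument is easy to make concrete with made-up numbers: doubling the tick rate roughly doubles per-match simulation CPU, which multiplies straight through the server bill at scale. Every figure below (players per core, $/core-hour, concurrency) is a hypothetical illustration; only the “2× ticks → roughly 2× compute” relationship comes from the discussion.

```cpp
#include <cstdio>

// Back-of-the-envelope cost comparison. All numbers are hypothetical
// illustrations, NOT Riot figures; only the "double the ticks, roughly
// double the simulation CPU" relationship is taken from the discussion.
int main() {
    const double concurrentPlayers  = 1'000'000;  // hypothetical concurrency
    const double playersPerCore64   = 30.0;       // hypothetical capacity at 64 tick
    const double playersPerCore128  = playersPerCore64 / 2.0;  // ~half at 128 tick
    const double dollarsPerCoreHour = 0.04;       // hypothetical hosting rate

    const double cores64  = concurrentPlayers / playersPerCore64;
    const double cores128 = concurrentPlayers / playersPerCore128;

    std::printf("64-tick : %.0f cores, ~$%.0f/hour\n", cores64,  cores64  * dollarsPerCoreHour);
    std::printf("128-tick: %.0f cores, ~$%.0f/hour\n", cores128, cores128 * dollarsPerCoreHour);
    return 0;
}
```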