Static Allocation with Zig
Static Allocation as “Old but New”
- Many note that “allocate everything at init, no heap afterward” is decades-old practice in embedded, early home computing, some DBs, and game engines.
- Others argue it’s still underused in mainstream backend/web work, so repackaging it (e.g. as a style guide) is useful, not hype.
- Several point out TigerStyle explicitly builds on prior work like NASA’s safety rules, not as something novel but as a disciplined application.
Motivations and Claimed Benefits
- Determinism: avoiding runtime allocation improves latency predictability and makes worst‑case behavior easier to reason about.
- Safety: in Zig, which lacks a borrow checker, banning post‑init allocation serves as a strategy for avoiding use‑after‑free bugs and resource management scattered across the codebase.
- Simpler reasoning: centralized initialization and fixed limits encourage explicit thinking about resource bounds (connections, buffers, per-request memory) and reduce “soup of pointers.”
- Design forcing function: static allocation pushes you to define application‑level limits and batch patterns (regions/pools), similar to Apache/Nginx memory pools.
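The "explicit limits, allocate everything at init" pattern above can be sketched in miniature. This is an illustrative example, not code from the discussion; the names (`request_slot`, `acquire_slot`, the limits) are hypothetical, and it is written in C to keep the sketch portable:

```c
#include <stddef.h>
#include <string.h>

/* Illustrative sketch: all memory for request handling is reserved once,
 * as static arrays, so no malloc() is needed after startup. The limits
 * are explicit, application-level design decisions. */

#define MAX_CONNECTIONS 64    /* hypothetical fixed concurrency limit */
#define REQUEST_BUF_SIZE 4096 /* hypothetical fixed per-request memory */

typedef struct {
    int in_use;
    unsigned char buf[REQUEST_BUF_SIZE];
} request_slot;

static request_slot slots[MAX_CONNECTIONS]; /* reserved at load time */

/* Acquire a request slot; returns NULL when the fixed limit is hit,
 * forcing the caller to handle overload explicitly (shed load, queue,
 * reject) rather than allocating unboundedly. */
static request_slot *acquire_slot(void) {
    for (size_t i = 0; i < MAX_CONNECTIONS; i++) {
        if (!slots[i].in_use) {
            slots[i].in_use = 1;
            return &slots[i];
        }
    }
    return NULL; /* at capacity: a visible design limit, not an OOM surprise */
}

static void release_slot(request_slot *s) {
    memset(s->buf, 0, sizeof s->buf);
    s->in_use = 0;
}
```

The point of the sketch is the forcing function: hitting `NULL` at the capacity limit is an explicit overload path the designer must think about up front, rather than an allocation failure discovered in production.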
Critiques and Tradeoffs
- Static reservation can hoard memory and starve other processes, especially on multi‑tenant systems; dynamic allocation plus good design is often “good enough.”
- With OS overcommit, large “static” reservations don’t guarantee you won’t OOM later, and touching all pages at startup just shifts when failure happens.
- You still need internal allocators (pools, free-lists), so “no allocation” really means “no OS-level allocation after init,” not that memory management disappears.
- Fragmentation and exhaustion of fixed pools can be hard to debug (e.g. comparisons to lwIP's fixed pools), and stale indices into reused pool slots can still produce logical use‑after‑free bugs.
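The index-reuse hazard in the last point has a standard mitigation: generation-tagged handles. The sketch below is illustrative (the names and layout are hypothetical, not from the discussion) and shows how a per-slot generation counter turns a stale index into a detectable lookup failure instead of a silent alias of the slot's new occupant:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch: a fixed pool where handles carry a generation.
 * A plain index can be "use-after-freed": the slot is freed, reallocated,
 * and the stale index silently reads the new occupant. Bumping a
 * generation counter on free catches this at lookup time. */

#define POOL_SIZE 16 /* hypothetical fixed pool size */

typedef struct { uint32_t index; uint32_t generation; } handle;

typedef struct {
    uint32_t generation; /* bumped on every free */
    int live;
    int value;           /* payload, for illustration */
} slot;

static slot pool[POOL_SIZE];

static handle pool_alloc(int value) {
    for (uint32_t i = 0; i < POOL_SIZE; i++) {
        if (!pool[i].live) {
            pool[i].live = 1;
            pool[i].value = value;
            return (handle){ .index = i, .generation = pool[i].generation };
        }
    }
    return (handle){ .index = POOL_SIZE, .generation = 0 }; /* exhausted */
}

static void pool_free(handle h) {
    pool[h.index].live = 0;
    pool[h.index].generation++; /* invalidate all outstanding handles */
}

/* Returns NULL for stale or invalid handles instead of aliasing
 * whatever now occupies the slot. */
static int *pool_get(handle h) {
    if (h.index >= POOL_SIZE) return NULL;
    slot *s = &pool[h.index];
    if (!s->live || s->generation != h.generation) return NULL;
    return &s->value;
}
```

This doesn't eliminate the bug class, but it converts a silent logical use‑after‑free into an explicit `NULL` the caller can check.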
OS, Databases, and Context
- Discussion connects static DB buffers to Linux overcommit and OOM behavior; some see historical DB tuning as a driver for overcommit.
- For a file/block‑backed database, static limits govern in‑memory concurrency rather than total data size, which many see as a good fit.
- For an in‑memory KV store, commenters stress this implies a hard upper bound on stored pairs and paying allocation cost upfront.
Broader Reflections
- Some see static allocation as aligning with safety‑critical and game/embedded practice; others note most modern apps favor GC and dynamic allocation for ease.
- There’s debate over theoretical implications (a program with fixed memory is a finite‑state machine rather than Turing complete, which changes what can be proven about it), but consensus that real machines are finite anyway.
- Several highlight the broader issue of how old techniques get lost and must be “re‑marketed” to new generations.