Why I wrote the BEAM book
BEAM performance, pauses, and mailboxes
- Commenters debate how a 15ms pause could be “post‑mortem worthy”; those familiar with BEAM note that typical latencies are in microseconds, so a jump to milliseconds can cause massive backlogs.
- A gen_server is described as effectively a big mutex guarding shared state; if it normally serves a request in ~20µs, a 15ms stall can queue hundreds of messages.
- Unsafe receive patterns that scan entire mailboxes become catastrophic under backlog, making processing time proportional to queue length.
- Some systems resort to drastic recovery strategies: dropping entire mailboxes, age‑based request dropping, tuning GC around large mailboxes, and monitoring drain/accumulation rates.
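The arithmetic behind those claims is easy to check. A back‑of‑envelope sketch (in Python, since the thread gives no code; the assumption that requests keep arriving at the full 20µs service rate during the stall is mine):

```python
# How a 15 ms pause backs up a process that normally serves one
# request every 20 µs (numbers from the discussion above).
service_time_us = 20                     # normal time per request
pause_ms = 15                            # one stall
arrivals_per_us = 1 / service_time_us    # ASSUMPTION: arrivals at full load

backlog = int(pause_ms * 1000 * arrivals_per_us)
print(backlog)  # 750 messages queued during a single 15 ms stall

# Why an unsafe receive that scans the whole mailbox is catastrophic:
# if each receive walks all n queued messages before matching, draining
# the backlog costs n + (n-1) + ... + 1 = n(n+1)/2 message scans.
def drain_cost(n):
    return n * (n + 1) // 2

print(drain_cost(backlog))  # 281625 scans — quadratic, not linear
```

This is why a pause that sounds tiny in absolute terms can still be "post‑mortem worthy": the stall itself is 15 ms, but the quadratic drain afterward is where the real damage happens.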
Concurrency model, OTP, and failure handling
- Commenters survey BEAM’s main concurrency tools: gen_server, ETS (in‑memory tables), and persistent_term, plus the newer process “aliases,” which stop replies from piling up in the mailbox after a call has timed out.
- There’s discussion of where to block work (e.g., letting callers block rather than the gen_server) and how to apply backpressure instead of blindly queueing.
- Some argue BEAM’s magic is really OTP’s abstractions (supervision trees, processes, fail‑fast semantics), which can be emulated conceptually in other languages, though often without BEAM’s preemptive, lightweight processes.
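The "let callers block" idea above is language‑agnostic. A minimal Python analogy (not BEAM code — names and the queue bound are illustrative) shows the pattern: a bounded queue makes producers block or fail fast when the server is behind, instead of silently accumulating an unbounded mailbox:

```python
# Backpressure sketch: a bounded queue is the pressure point.
# Callers block on put(); a timeout turns overload into an explicit
# error instead of unbounded queueing. Purely illustrative.
import queue
import threading

requests = queue.Queue(maxsize=8)  # ASSUMPTION: 8 is an arbitrary bound

def server():
    while True:
        req = requests.get()   # serve one request at a time
        if req is None:        # shutdown sentinel
            break
        # ... handle req here ...

threading.Thread(target=server, daemon=True).start()

try:
    # put() blocks while the queue is full; queue.Full after the
    # timeout lets the caller shed load (retry, drop, return 503).
    requests.put({"op": "work"}, timeout=0.5)
except queue.Full:
    pass  # explicit overload handling instead of silent backlog

requests.put(None)  # stop the server
```

The design choice mirrors the thread's point: the slow party is the one that waits, and overload surfaces at the caller rather than as a giant mailbox discovered during a post‑mortem.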
BEAM vs other runtimes and stacks
- Historically BEAM was unique; now many problems it solves (message buses, clustering, serialization, orchestration, reliability) have a “zoo” of alternatives: Kafka/SQS/NATS, gRPC/Cap’n Proto, Kubernetes, lambdas, micro‑VMs, etc.
- Several comments emphasize that BEAM’s 2025 advantage is integration: one coherent runtime with built‑in messaging, supervision, clustering patterns, and a consistent data model, rather than wiring many disparate systems.
- Others counter that Kubernetes‑style stacks and language‑agnostic infra give more flexibility, and BEAM’s built‑ins (queues, DB, clustering) can be weaker than best‑of‑breed external tools.
Adoption, marketing, and ecosystem
- Many see Erlang/Elixir/BEAM as highly underrated but note weak corporate backing and marketing compared to Java, Go, Rust, etc.
- Some say Erlang really shines at very large scales (millions of users), and its “all‑or‑nothing OS‑like stack” plus unusual deployment model (ERTS, epmd, clustering) raises the adoption bar.
- Others argue modern “standard” languages now handle large concurrency loads on single machines, reducing the perceived need for BEAM.
Experiences with Erlang/Elixir
- Multiple comments praise BEAM as “alien tech” for fault‑tolerant, concurrent, networked systems; Elixir especially is highlighted for web apps (Phoenix, LiveView) and small teams managing big problems.
- Some report painful setup and environment drift (especially with LiveView) and prefer containers to stabilize runtime expectations. Others find Elixir deployment straightforward in recent years.
- Erlang is credited with improving developers’ thinking about immutability, pattern matching, and concurrency, but can make other languages feel clumsy afterward.
The BEAM book and technical publishing
- The book is open‑sourced on GitHub, with paid Kindle/print versions for those wanting to support the author. Readers welcome deep, implementation‑level documentation, saying official docs are too shallow.
- Several note that writing a book is a powerful way to truly understand a complex system.
- Broader discussion covers traditional publishers vs self‑publishing: publishers bring marketing, editing, and print logistics but push for broad, beginner‑friendly topics; niche, deeply technical works increasingly succeed via self‑publishing, LeanPub, and similar platforms.