Why is the Rust compiler so slow?

Deployment strategy & Docker

  • Many commenters argue the article’s pain is largely self‑inflicted: rebuilding inside Docker from scratch and wiping caches on each change is what’s slow, not Rust per se.
  • Suggested alternatives:
    • Build locally with incremental compilation, then copy the static binary into a minimal runtime image.
    • Use CI to build the image; don’t rebuild containers on every local edit.
    • Use bind mounts or devcontainers to share target/ between host and container.
  • Some push back that containers are about reproducibility and matching production, even for personal projects, but others call this “over‑modernizing” a trivial static website.

Is the Rust compiler actually slow?

  • Developers coming from C++ report that Rust builds feel comparable to, or faster than, large C++/Scala builds; others say even medium‑sized Rust projects (or a cargo install) are noticeably slower than building C or Fortran.
  • Several note that memory use during Rust builds can be high, but others cite C/C++ builds using tens of GB as well.
  • A recurring view: for small to medium codebases with incremental builds, Rust is “fast enough”; pain shows up on large, heavily generic, macro‑heavy projects.

Technical causes of slow builds

  • The thread cites a well‑known breakdown of design choices that trade compile time for safety and runtime performance:
    • Monomorphization of generics, pervasive value types, and “zero‑cost” abstractions that generate large amounts of specialized code (a minimal illustration follows this list).
    • Heavy use of macros and proc‑macros that expand into large amounts of code and constrain parallelism.
    • The LLVM backend running aggressive optimization passes over large amounts of IR.
    • Separate compilation by crate, with Cargo and rustc lacking a fully unified global view.
    • Trait coherence rules, plus tests colocated with code, which add to the work done per build.
  • Borrow checking and type checking are repeatedly said to be a small fraction of total time; codegen and linking dominate.
  • Async, complex const‑eval, and deep/nested types are mentioned as pathological cases.
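
A minimal sketch (not from the thread; sum_generic and total_dyn are illustrative names) of why monomorphization multiplies codegen work: every concrete type a generic function is instantiated with gets its own specialized copy handed to LLVM, whereas a dynamic‑dispatch version is compiled once and pays with a vtable call at runtime.

    // Generic version: rustc monomorphizes this, emitting one
    // specialized copy per concrete T used in the program, and
    // each copy is separately optimized by LLVM.
    fn sum_generic<T: std::iter::Sum + Copy>(items: &[T]) -> T {
        items.iter().copied().sum()
    }

    // Dynamic-dispatch version: compiled once; calls go through a
    // vtable at runtime instead of generating per-type code.
    fn total_dyn(items: &[Box<dyn Fn() -> i64>]) -> i64 {
        items.iter().map(|f| f()).sum()
    }

    fn main() {
        // Each distinct instantiation below adds another copy of
        // sum_generic to the compiler's codegen workload.
        let a = sum_generic(&[1i32, 2, 3]);
        let b = sum_generic(&[1.0f64, 2.0, 3.0]);
        let c = sum_generic(&[1u64, 2, 3]);
        println!("{a} {b} {c}");

        let closures: Vec<Box<dyn Fn() -> i64>> = vec![Box::new(|| 1), Box::new(|| 2)];
        println!("{}", total_dyn(&closures));
    }

This is the tradeoff commenters keep describing: the monomorphized code is typically faster at runtime, but the duplicated, heavily optimized output is a large part of what makes clean builds expensive.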

Comparisons, alternatives & ecosystem attitudes

  • Go, D, Zig, OCaml, Java, C (with unity builds), and JIT‑compiled languages are cited as counterpoints showing that much faster compilation is possible with different design tradeoffs.
  • Zig’s custom non‑LLVM backend and whole‑program compilation model are cited as an existence proof that a systems language can have near‑instant rebuilds, though with different safety and feature tradeoffs.
  • Some criticize the Rust ecosystem for overusing generics and macros and for not prioritizing compile‑time cost (one common mitigation is sketched below); others emphasize that runtime performance and safety are the primary goals, with ongoing work on the Cranelift backend, incremental compilation, caching, and hot‑reloading tools.
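
A hypothetical illustration of one mitigation the compile‑time critics point to: keep the generic surface thin by forwarding to a non‑generic inner function, so only a small shim is monomorphized per caller type while the real body is compiled once (read_config and its body are made up for this sketch).

    use std::fs;
    use std::io;
    use std::path::Path;

    // Thin generic wrapper: monomorphized once per concrete P the
    // callers use, but it contains almost no code.
    pub fn read_config<P: AsRef<Path>>(path: P) -> io::Result<String> {
        // Non-generic inner function holds the actual work and is
        // compiled only once, regardless of how many P types appear.
        fn inner(path: &Path) -> io::Result<String> {
            fs::read_to_string(path)
        }
        inner(path.as_ref())
    }

    fn main() -> io::Result<()> {
        // Two different argument types instantiate only the tiny
        // wrapper; `inner` is shared between them.
        let _from_str = read_config("config.toml");
        let _from_string = read_config(String::from("config.toml"));
        Ok(())
    }

The pattern does not change Rust’s compilation model, but it shows where the knobs are: how much code each generic instantiation drags into codegen is largely an API‑design choice.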