Static search trees: faster than binary search
Overall reception
- Many commenters found the write‑up unusually thorough and appreciated seeing low‑level optimization work laid out step by step.
- Some readers admitted the later SIMD and micro-optimizations were beyond their comfort zone but still valued the earlier conceptual parts.
- A few people explicitly linked this kind of work to real-world speedups that, in aggregate, save large amounts of human time.
Language choice and accessibility
- Large subthread on whether Rust was a good choice for examples.
- One side: C (or C++) is more universally readable; Python is popular for teaching; pseudocode is more language‑neutral.
- Other side: blog authors can pick any language; Rust is now common enough that developers should at least be able to read it; the code is mostly straightforward imperative logic.
- Some argued Python is not suitable for intrinsics and cache-level tuning; C/C++ or Rust are more appropriate.
- Debate over pseudocode: some want high-level pseudocode plus implementation appendix; others say pseudocode is too underspecified, especially for SIMD and cache-line details.
Rust vs other systems languages
- Rust praised for:
- Portable SIMD in the standard ecosystem (the nightly `std::simd` / portable-simd API).
- Tooling (cargo, build scripts) and safety model compared with C++.
- Counterpoints:
- Claims that C/C++ already have portable SIMD via libraries or compiler extensions.
- Concerns about Rust’s popularity (e.g., TIOBE rank) and long-term stability vs yet‑newer languages like Zig.
- Worries that relying on many third‑party crates is a security and legal risk.
- Discussion of how hard certain data structures (graphs, doubly‑linked lists) are in Rust’s ownership model unless using indices or reference counting.
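The index-based workaround mentioned in that subthread can be sketched briefly. This is a minimal, hypothetical illustration (types and names are my own, not from the post): nodes live in a `Vec` and "pointers" are plain `usize` indices, which sidesteps the ownership problems a doubly‑linked list would otherwise hit in Rust.

```rust
// Minimal arena/index-based doubly-linked list sketch.
// Indices into `nodes` stand in for pointers, so no node
// needs to own or mutably alias another node.
struct Node {
    value: i32,
    prev: Option<usize>,
    next: Option<usize>,
}

struct List {
    nodes: Vec<Node>,
    head: Option<usize>,
    tail: Option<usize>,
}

impl List {
    fn new() -> Self {
        List { nodes: Vec::new(), head: None, tail: None }
    }

    // Append at the tail; returns the new node's index.
    fn push_back(&mut self, value: i32) -> usize {
        let idx = self.nodes.len();
        self.nodes.push(Node { value, prev: self.tail, next: None });
        match self.tail {
            Some(t) => self.nodes[t].next = Some(idx),
            None => self.head = Some(idx),
        }
        self.tail = Some(idx);
        idx
    }

    // Walk forward from the head, collecting values.
    fn to_vec(&self) -> Vec<i32> {
        let mut out = Vec::new();
        let mut cur = self.head;
        while let Some(i) = cur {
            out.push(self.nodes[i].value);
            cur = self.nodes[i].next;
        }
        out
    }
}

fn main() {
    let mut list = List::new();
    for v in [10, 20, 30] {
        list.push_back(v);
    }
    assert_eq!(list.to_vec(), vec![10, 20, 30]);
    println!("{:?}", list.to_vec());
}
```

The same trick (indices into an arena, or `Rc`/`RefCell` where shared ownership is genuinely needed) is the standard answer for graphs as well.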
Algorithmic and performance discussion
- Strong focus on constant‑factor speedups vs asymptotic big‑O; several commenters note that in practice, constant‑factor wins often outweigh gains from asymptotically "better" algorithms.
- Comparisons with:
- Interpolation search (expected O(log log n) on uniformly distributed keys, but linear in the worst case).
- Eytzinger layout and cache‑aware trees, prefetching, batching queries, and SIMD vectorization.
- B‑trees, buffer/fractal trees, compacting B‑tree variants, and static vs dynamic trees.
- Bitmap/bitset approaches, roaring bitmaps, rank/select structures, minimal perfect hashing.
- Ideas for further work: query partitioning and sorting, radix‑based schemes, compressed node representations, and van Emde Boas–style structures.
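The Eytzinger layout that anchors much of this discussion can be sketched in a few lines of Rust. This is a generic illustration of the BFS layout and its branch‑light lower‑bound loop, not the post's actual implementation (which layers on prefetching, batched queries, and SIMD):

```rust
// Build the Eytzinger (BFS) layout from a sorted slice.
// Slot 0 is unused so node k has children 2k and 2k+1.
fn eytzinger(sorted: &[i32]) -> Vec<i32> {
    let mut out = vec![0; sorted.len() + 1];
    // In-order traversal of the implicit tree visits slots
    // in sorted-key order, so we can fill from `sorted` directly.
    fn fill(sorted: &[i32], out: &mut [i32], i: &mut usize, k: usize) {
        if k < out.len() {
            fill(sorted, out, i, 2 * k);
            out[k] = sorted[*i];
            *i += 1;
            fill(sorted, out, i, 2 * k + 1);
        }
    }
    let mut i = 0;
    fill(sorted, &mut out, &mut i, 1);
    out
}

// Lower bound: 1-based index (into the Eytzinger array) of the
// first element >= x, or 0 if every element is smaller. The loop
// body compiles to arithmetic rather than a hard-to-predict branch.
fn lower_bound(tree: &[i32], x: i32) -> usize {
    let mut k = 1;
    while k < tree.len() {
        k = 2 * k + usize::from(tree[k] < x);
    }
    // Undo the final descents past the answer: shift out the
    // trailing 1-bits (right turns) plus one more bit.
    k >> (k.trailing_ones() + 1)
}

fn main() {
    let tree = eytzinger(&[1, 3, 5, 7, 9, 11, 13, 15]);
    assert_eq!(tree[lower_bound(&tree, 6)], 7);
    assert_eq!(tree[lower_bound(&tree, 1)], 1);
    assert_eq!(lower_bound(&tree, 99), 0); // everything is smaller
    println!("ok");
}
```

Compared with classic binary search on a sorted array, this layout keeps the first few levels of the tree in the same few cache lines, which is where most of the practical speedup comes from.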
Use cases and practicality
- Mentioned applications: DNA/suffix‑array indexing, search engines, SQL joins, and static indexes with occasional writes via a small mutable overlay.
- Some skepticism about using such advanced material in interview questions, given the mismatch with typical day‑to‑day job work.
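The "static index with a small mutable overlay" idea above can be sketched as follows. This is a hypothetical illustration of the pattern (the type and its methods are my own): reads hit an immutable sorted array, which could use any static tree layout, while writes land in a small `BTreeMap` that is periodically compacted back into the static part.

```rust
use std::collections::BTreeMap;

// Sketch of a static index plus mutable overlay.
struct OverlayIndex {
    base: Vec<(u64, String)>,       // sorted by key, immutable between merges
    overlay: BTreeMap<u64, String>, // recent writes, kept small
}

impl OverlayIndex {
    fn new(mut base: Vec<(u64, String)>) -> Self {
        base.sort_by_key(|&(k, _)| k);
        OverlayIndex { base, overlay: BTreeMap::new() }
    }

    // Writes only ever touch the overlay.
    fn insert(&mut self, key: u64, value: String) {
        self.overlay.insert(key, value);
    }

    // Overlay wins on lookup: it holds the newest version of a key.
    fn get(&self, key: u64) -> Option<&String> {
        if let Some(v) = self.overlay.get(&key) {
            return Some(v);
        }
        self.base
            .binary_search_by_key(&key, |&(k, _)| k)
            .ok()
            .map(|i| &self.base[i].1)
    }

    // Merge the overlay back into the base; a real system would
    // rebuild the static tree layout here as well.
    fn compact(&mut self) {
        for (k, v) in std::mem::take(&mut self.overlay) {
            match self.base.binary_search_by_key(&k, |&(bk, _)| bk) {
                Ok(i) => self.base[i].1 = v,
                Err(i) => self.base.insert(i, (k, v)),
            }
        }
    }
}

fn main() {
    let mut idx = OverlayIndex::new(vec![(1, "a".into()), (3, "c".into())]);
    idx.insert(2, "b".into());
    assert_eq!(idx.get(2).map(String::as_str), Some("b"));
    assert_eq!(idx.get(3).map(String::as_str), Some("c"));
    idx.compact();
    assert_eq!(idx.get(2).map(String::as_str), Some("b"));
    println!("ok");
}
```

This is the same shape as an LSM tree with a single immutable level: lookups pay one extra probe into the overlay, and compaction amortizes the cost of keeping the fast static structure current.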
Presentation feedback
- Several complaints about graph color choices making lines hard to distinguish; author acknowledges this as a to‑do.