We rewrote our Rust WASM parser in TypeScript and it got faster

Algorithm vs. language in performance

  • Many argue the main win wasn’t “TypeScript vs Rust” but reducing an O(N²) streaming parser to O(N) via caching; this algorithmic fix alone gave a major speedup.
  • Others note that removing the WASM/JS boundary also gave substantial gains, so both algorithm and architecture mattered.
  • Parallel drawn to uv vs pip: most speed comes from doing less work and better algorithms, not just “Rust is fast,” though some insist the language still adds a nontrivial “extra bit.”
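To make the O(N²)-to-O(N) point concrete, here is a minimal sketch of the pattern commenters describe. The classes and the line-splitting workload are illustrative, not the blog’s actual parser: the naive version re-scans the whole accumulated buffer on every incoming chunk (quadratic overall), while the cached version remembers how far it has already scanned and resumes there (linear overall).

```typescript
// Hypothetical streaming line parser; names and workload are illustrative.

// O(N^2) pitfall: every feed() re-scans the entire buffer from index 0,
// so total work grows quadratically with input size.
class NaiveParser {
  private buf = "";
  lines: string[] = [];
  feed(chunk: string): void {
    this.buf += chunk;
    this.lines = [];
    let start = 0; // restarts from the beginning on every call
    for (let i = 0; i < this.buf.length; i++) {
      if (this.buf[i] === "\n") {
        this.lines.push(this.buf.slice(start, i));
        start = i + 1;
      }
    }
  }
}

// O(N) fix: cache the scan position so each byte is examined once.
class CachingParser {
  private buf = "";
  private scanned = 0;   // index up to which buf has been scanned
  private lineStart = 0; // start of the current partial line
  lines: string[] = [];
  feed(chunk: string): void {
    this.buf += chunk;
    for (let i = this.scanned; i < this.buf.length; i++) {
      if (this.buf[i] === "\n") {
        this.lines.push(this.buf.slice(this.lineStart, i));
        this.lineStart = i + 1;
      }
    }
    this.scanned = this.buf.length; // resume here next time
  }
}
```

Both produce identical output; only the caching version stays linear as chunks accumulate, which is the kind of fix that pays off regardless of implementation language.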

WASM–JS boundary and serialization costs

  • Heavy focus on the overhead of crossing the WASM/JS boundary: serialization, copying, and object construction dominate runtime in many designs.
  • serde-wasm-bindgen is discussed as an improvement over JSON, but still limited by FFI call counts and string handling.
  • Several suggest shared buffers (TypedArray/SharedArrayBuffer) to avoid copies, while noting this forces low-level “raw bytes” programming.
  • Consensus: interop overhead and data marshaling are real bottlenecks; they can swamp raw compute advantages.
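A sketch of the “raw bytes” approach mentioned above, on the JS side only: instead of one boundary crossing per token, the WASM side would write fixed-width (start, end, kind) records into a buffer, and JS decodes them in a single pass. The record layout and all names here are assumptions for illustration, not the blog’s actual ABI.

```typescript
// Hypothetical flat-buffer handoff: 3 x uint32 per token record.
interface Token {
  start: number; // byte offset where the token begins
  end: number;   // byte offset one past the token's last byte
  kind: number;  // numeric token-type tag
}

// Decode `count` tokens from a flat Uint32Array of 3-word records,
// with zero per-token FFI calls and no intermediate JSON strings.
function decodeTokens(words: Uint32Array, count: number): Token[] {
  const tokens: Token[] = new Array(count);
  for (let i = 0; i < count; i++) {
    const base = i * 3;
    tokens[i] = {
      start: words[base],
      end: words[base + 1],
      kind: words[base + 2],
    };
  }
  return tokens;
}
```

In a real WASM setup, `words` would be a view over the module’s linear memory (e.g. `new Uint32Array(memory.buffer, ptr, count * 3)`), which is exactly the low-level bookkeeping commenters warn this style forces on you.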

Rust vs TypeScript / JS trade-offs

  • Some say TS/JS abstractions helped them see the real architectural problem, while others counter that high-level abstractions can hide costs.
  • Rust’s ownership model can make some optimal algorithms harder to express (e.g., mutating disjoint slices or tree structures with parent pointers).
  • For small, streaming workloads, a well-JITed JS/TS parser can be “fast enough” and simpler than Rust+WASM; for large batch workloads, Rust/WASM might still win (unclear from this thread).

Benchmarking and measurement issues

  • Critique of timing methodology: per-call measurements in browsers are noisy because timers are deliberately coarsened as a timing-attack defense; commenters recommend timing large batches and dividing instead.
  • Some readers find the blog’s final summary table confusing or inconsistent with described baselines.
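The batch-timing recommendation can be sketched as follows (a minimal helper, not the blog’s harness): run the function many times between two `performance.now()` reads and amortize, so coarsened timer granularity is spread over the whole batch rather than dominating a single measurement.

```typescript
// Minimal batch-timing sketch; the helper name is illustrative.
// Browser timers are coarsened (timing-attack mitigation), so timing a
// single call mostly measures timer resolution. Amortizing over a large
// batch recovers a usable per-call estimate.
function timePerCall(fn: () => void, iterations = 100_000): number {
  fn(); // warm-up call so the JIT has seen the function at least once
  const t0 = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  const t1 = performance.now();
  return (t1 - t0) / iterations; // amortized milliseconds per call
}
```

Even this is a rough tool: a serious harness would also control for GC pauses and JIT tier-up by discarding early iterations.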

Rewrites, productivity, and anecdotes

  • Common theme: rewriting (even in the same language) lets teams fix old mistakes and bugs, often yielding big speedups independent of language.
  • Multiple stories compare Python vs C++/Go/Rust, showing that:
    • Algorithmic bugs or poor architecture can dwarf language speed.
    • Higher-level languages can make it easier to iterate, profile, and fix performance-critical code.
    • Yet for certain services, Python’s runtime overhead became a serious problem, prompting rewrites in faster languages.

LLMs, blog quality, and OpenUI’s goal

  • Several comments complain the article reads like AI-assisted “slop,” questioning clarity and correctness of benchmarks.
  • The author admits to heavy LLM use in assembling internal benchmark notes into the blog post, citing limited team capacity.
  • OpenUI is described as a bridge between LLMs and live UI, using a custom DSL to generate safe, consistent components instead of letting LLMs emit arbitrary code or raw JSON.