Fabrice Bellard Releases MicroQuickJS
MicroQuickJS design and constraints
- Implements a small ES5-ish subset aimed at embedded use: no dynamic `eval`, strict globals, denser arrays without holes, and limited built-ins (e.g. `Date.now()` only, many `String` methods omitted).
- “Stricter mode” disallows implicit globals and makes the global object non-mutable in the usual browser sense (`window.foo` / `globalThis.foo`); globals must be declared explicitly with `var`. (See the sketch after this list.)
- Arrays must be dense: writing far beyond the end (e.g. `a[10] = 2` on an empty array) throws, to prevent accidental gigantic allocations; sparse structures should use plain objects.
- Footprint targets are ~10 KB RAM and ~100 KB ROM, making it competitive with Espruino and other tiny JS engines; some note it would have been ideal for Redis scripting or similar use cases.
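A minimal sketch of how these restrictions would look to a script running inside the engine. The exact failure modes (what throws, and when) are assumptions based on the discussion, not the engine's documented behavior:

```js
// Sketch of the stricter semantics described above; throwing cases are
// shown commented out. Exact errors are assumptions.
var x = 1;            // OK: globals are declared explicitly with `var`
// y = 2;             // would throw: implicit globals are disallowed
// globalThis.z = 3;  // would throw: the global object is not mutable

var a = [];
a[0] = 1;             // OK: the array stays dense
a[1] = 2;             // OK: appending at the end keeps it dense
// a[1000000] = 3;    // would throw: far-out-of-range writes are rejected
var sparse = {};      // use a plain object for sparse index → value maps
sparse[1000000] = 3;  // fine: an object property, no giant allocation
```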
Sandboxing, untrusted code, and WebAssembly
- Multiple commenters focus on MicroQuickJS as a sandbox for untrusted user code or LLM‑generated code, especially from Python and other hosts.
- Embedding a full browser-grade engine (V8/JSC) is seen as heavyweight and hard to cap in memory and CPU time; many existing bindings explicitly warn they are not secure sandboxes.
- Running MicroQuickJS compiled to WebAssembly is attractive because it stays inside the Wasm sandbox, can be invoked from many languages, and allows hard resource caps (a host-side sketch follows this list); Figma’s use of QuickJS inside Wasm for plugins is cited as precedent.
- There is debate over performance: nesting JS → QuickJS → Wasm → JS is much slower than native V8/JSC, but some argue the predictability and JIT-friendliness of Wasm can partially offset this for certain workloads.
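A host-side sketch of the hard-caps idea, assuming Node as the host. The `mquickjs.wasm` file, its `env.memory` import, and its `eval_js(ptr, len)` export are illustrative assumptions, not the published interface:

```js
// Sketch: invoking a Wasm-compiled engine with a hard memory cap from
// Node. The module name, import, and export below are hypothetical.
import { readFile } from "node:fs/promises";

async function runSandboxed(source) {
  // Hard cap: linear memory can never grow beyond `maximum` pages
  // (64 KiB each), regardless of what the untrusted script allocates.
  const memory = new WebAssembly.Memory({ initial: 4, maximum: 32 }); // 2 MiB

  const bytes = await readFile("mquickjs.wasm"); // hypothetical artifact
  const { instance } = await WebAssembly.instantiate(bytes, {
    env: { memory },
  });

  // Copy the untrusted source into the sandbox's linear memory at an
  // offset the engine is assumed to reserve for input.
  const src = new TextEncoder().encode(source);
  new Uint8Array(memory.buffer, 1024, src.length).set(src);

  // Wall-clock limits are the host's job, e.g. run this in a
  // worker_thread and call worker.terminate() on timeout.
  return instance.exports.eval_js(1024, src.length);
}
```

Because everything the guest can touch lives inside that one linear memory, the cap is enforced by the Wasm runtime itself rather than by the engine's cooperation.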
Embedded and alternative JS runtimes
- People compare MicroQuickJS to Espruino, Moddable’s XS, Elk, DeviceScript, and MicroPython/CircuitPython for ESP32/RP2040‑class boards.
- The absence of `malloc` and the small ROM/RAM requirements are seen as enabling microcontroller scripting in JS, though bindings/HALs and flashing toolchains remain the real pain points.
- Some speculate about running thousands of tiny interpreters in parallel (e.g. on GPUs), but current work in that direction is experimental and not clearly aligned with MicroQuickJS yet.
Lua, Redis, and language design
- One perspective: if MicroQuickJS had existed in 2010, Redis scripting might have chosen JS over Lua; Lua was picked for its tiny ANSI‑C implementation, not its syntax.
- Long sub‑thread debates Lua’s unfamiliar syntax (1‑based indexing, block keywords), versus its consistency, tail‑call optimization, and suitability for compilers/embedded scripting.
- Ideas like “language skins” (multiple syntaxes over one core semantics) are discussed as a way to reconcile familiarity with alternate designs.
Bellard’s reputation and development style
- Extensive admiration for Bellard’s breadth and depth: FFmpeg, QEMU, TinyCC, QuickJS, JSLinux, LZEXE, SDR/DVB hacks, an ASN.1 compiler, and an LLM inference engine.
- Many highlight his minimal‑dependency, single‑file C style and robust CS foundations; others note his low‑profile, non‑self‑promotional persona and lack of interviews.
- Some joke about the missing commit history and the “12‑minute” implementation, while others infer a private repo or a prototype-then-import workflow.
“Lite web” and browser bloat
- Inspired by “micro” JS, several commenters fantasize about a rebooted, lightweight web: HTML/JS/CSS subsets, Markdown‑over‑HTTP, “MicroBrowser/MicroWeb”, and progressive enhancement.
- Others argue there is no economic incentive: browsers are complex because they must run arbitrary apps compatibly; any “simple” browser fails on most sites normal users need.
- Gemini/Gopher/WAP are mentioned as historical or current attempts at simpler hypertext; opinions diverge on whether such parallel ecosystems can ever gain mainstream traction.
AI‑assisted experiments and HN norms
- A visible thread chronicles using an LLM-based coding assistant to build MicroQuickJS integrations (Python FFI, Wasm builds, playgrounds), offered as evidence of fast prototyping and sandbox viability.
- This sparks pushback about off‑topic AI evangelism, perceived self‑promotion, and “LLM slop”; others defend sharing such experiments as relevant and “hacker‑y” when they surface concrete findings (e.g., byte sizes, integration patterns, resource limits).
- There is broader meta‑discussion on when linking one’s own blog or AI outputs is helpful vs. annoying, and how LLMs change the perceived effort behind quick demos.