Show HN: One Human + One Agent = One Browser From Scratch in 20K LOC

Project constraints and implementation

  • Built in ~3 days with ~20K lines of Rust: ~14K for engine + X11, ~6K for Windows/macOS glue.
  • No third‑party Rust crates; uses system libraries for graphics, fonts, etc., resulting in dozens of dynamic deps on Linux.
  • Cross‑platform support (X11, Windows, macOS) with minimal binary sizes (~1 MB; can be shrunk further with different build flags).

Tooling, models, and cost

  • Implemented via a command‑line coding agent (“Codex”) using gpt‑5.2 with “xhigh” reasoning.
  • Work ran under a flat‑rate ChatGPT‑style subscription; the author estimates ~€19 of marginal cost for the 3‑day effort and says they would never have attempted this paying per token.
  • For local experiments they run a 120B open‑source model via vLLM on a single 96GB RTX Pro 6000, no tensor parallelism.
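The local setup described above could be launched roughly as follows; this is a config sketch, and the model name and flag values are illustrative assumptions, not the author's exact command:

```shell
# Hypothetical single-GPU vLLM launch matching the described setup:
# one 96 GB card, no tensor parallelism.
vllm serve openai/gpt-oss-120b \
  --tensor-parallel-size 1 \
  --gpu-memory-utilization 0.92
```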

Human‑in‑the‑loop vs agent swarms

  • Central theme: one skilled human + one good agent outperformed a “many agents, minimal humans” experiment (Cursor FastRender) in LOC, deps, and readability.
  • Several argue agents are amplifiers, not replacements: expertise and “good taste” in architecture remain crucial to avoid spaghetti and long iteration cycles.
  • Skepticism that swarms of autonomous agents are useful beyond narrow, easily decomposed tasks; demand for concrete success stories remains unmet.

Capabilities, limitations, and scope

  • The browser renders non‑JS sites like personal blogs and Wikipedia “shockingly well” given 72 hours of development, but layout is often chaotic and the program is crash‑prone.
  • There is no default stylesheet, so links may be inconsistently styled; features like the back button can be flaky on some platforms.
  • Widely agreed this is a basic renderer, not a production browser. Parsing/painting are seen as the “easy” parts compared to full web compatibility.

Code quality, tests, and reuse concerns

  • The code is praised as compact and readable, with far fewer dependencies than Cursor’s multi‑million‑LOC effort.
  • Specs and Web Platform Tests were placed in the repo, but the agent apparently never consulted them, relying instead on knowledge from its training data.
  • Some worry code could inadvertently mirror existing open‑source browsers; detecting such regurgitation is acknowledged as an open, hard problem.

Security, accessibility, and future directions

  • Security was essentially ignored; the author expects many severe issues and advises sandboxing. Rust only guards against some classes of memory bugs, and URL/file handling is likely unsafe.
  • Accessibility (AT‑SPI/UIA/NSAccessibility) under the no‑Rust‑deps rule is seen as non‑trivial and would likely require calling the C D‑Bus library directly or adopting toolkits like GTK/Qt.
  • Suggested workflow improvements include layered tests (DOM topology, layout geometry, pixel tests) and invariant checks (e.g., stable behavior under window resizing).
  • Broader reflections touch on AI displacing some web dev work, but consensus is that complex, maintainable systems still need human engineering judgment.