Things I would have told myself before building an autorouter
Algorithms vs industry work and CS education
- Several commenters reminisce about “real algorithms” and note that many jobs are CRUD/UI over black-box systems, not pathfinding/geometry.
- There’s criticism of CS curricula and degree requirements: current degrees are seen as poorly aligned with industry needs and used as a blunt gatekeeping tool. Some propose splitting CS into more focused sub-degrees and treating CRUD-style development more like a trade.
Why PCB autorouting is hard
- People ask how autorouters encode rules like clearances, angles, and net-specific constraints; answers note these are usually per-net design rules, and aesthetics are largely ignored.
- Multiple comments explain why PCB autorouting is harder than VLSI: few layers, large vias that block all layers, components as big obstructions, tight placement constraints, and many hidden, application-specific rules (SI/EMC, power integrity, high‑speed design).
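The “per-net design rules” mentioned above can be pictured as a lookup table with a default fallback. A minimal sketch, where all names, values, and layer strings are illustrative assumptions rather than any real tool’s format:

```typescript
// Illustrative sketch of per-net design rules with a default fallback.
// All names, values, and layer strings here are hypothetical.
interface DesignRules {
  clearanceMm: number;      // minimum copper-to-copper gap
  traceWidthMm: number;     // routed track width
  allowedLayers: string[];  // layers this net may use
}

const defaultRules: DesignRules = {
  clearanceMm: 0.2,
  traceWidthMm: 0.25,
  allowedLayers: ["F.Cu", "B.Cu"],
};

const perNetRules: Record<string, Partial<DesignRules>> = {
  USB_DP: { traceWidthMm: 0.2, allowedLayers: ["F.Cu"] }, // impedance-controlled
  VBUS: { traceWidthMm: 0.8 },                            // power net: wider trace
};

function rulesFor(net: string): DesignRules {
  return { ...defaultRules, ...(perNetRules[net] ?? {}) };
}

// A clearance check between two parallel trace segments reduces to:
// edge gap = centerline distance - (width1 + width2) / 2
function clearanceOk(net1: string, net2: string, centerDistMm: number): boolean {
  const r1 = rulesFor(net1);
  const r2 = rulesFor(net2);
  const edgeGap = centerDistMm - (r1.traceWidthMm + r2.traceWidthMm) / 2;
  return edgeGap >= Math.max(r1.clearanceMm, r2.clearanceMm);
}
```

The SI/EMC and high-speed rules the commenters call “hidden” are exactly the ones that don’t fit such a table, which is part of why the problem is hard.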
Attitudes toward autorouters and desired workflows
- Many experienced EEs fall into “never trust the autorouter” or “co‑creation, not full auto” camps: they want tools that route constrained subsets after they’ve finalized placement, not full-board spaghetti.
- Desired features: strong constraint systems (length matching, layer preferences, forbidden regions), prioritisation of critical nets, good handling of buses/differential pairs, and routing that respects real best practices.
- There’s nostalgia for older tools and recognition that modern KiCad has made big strides (push‑and‑shove, autocomplete, draggable buses), with some arguing it’s now close to commercial tools.
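The constraint wish-list above could be sketched as a small tagged-union spec. Everything here (names, fields, the priority scheme) is a hypothetical illustration, not any real tool’s API:

```typescript
// Hypothetical shape of a routing-constraint spec; not a real tool's format.
type NetConstraint =
  | { kind: "lengthMatch"; nets: string[]; toleranceMm: number }
  | { kind: "layerPreference"; net: string; layers: string[] }
  | { kind: "forbiddenRegion"; region: { x: number; y: number; w: number; h: number }; nets?: string[] }
  | { kind: "priority"; net: string; rank: number } // route critical nets first
  | { kind: "diffPair"; positive: string; negative: string; gapMm: number };

const constraints: NetConstraint[] = [
  { kind: "diffPair", positive: "USB_DP", negative: "USB_DN", gapMm: 0.15 },
  { kind: "lengthMatch", nets: ["DDR_DQ0", "DDR_DQ1"], toleranceMm: 0.5 },
  { kind: "priority", net: "CLK", rank: 0 },
];

// A router would sort nets so higher-priority (lower rank) nets are routed first;
// unranked nets keep their input order via a stable sort.
function routingOrder(allNets: string[]): string[] {
  const rank = new Map<string, number>();
  for (const c of constraints) {
    if (c.kind === "priority") rank.set(c.net, c.rank);
  }
  return [...allNets].sort(
    (a, b) =>
      (rank.get(a) ?? Number.MAX_SAFE_INTEGER) -
      (rank.get(b) ?? Number.MAX_SAFE_INTEGER)
  );
}
```

The point of the tagged union is that a checker can exhaustively handle every constraint kind, which is the “strong constraint system” property commenters ask for.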
AI/ML, constraint programming, and datasets
- One line of discussion argues PCB routing is “just” an image-transformer problem given a huge, physically accurate dataset; others counter that unlike art, every track must be rule‑correct, so small defects are fatal.
- Ideas for datasets include synthetic boards routed by heuristics plus reinforcement learning, or scans/reverse‑engineered industrial PCBs. Estimates suggest tens of millions of high‑fidelity examples might be needed.
- There’s interest in combining AI with strong DRC/constraint engines or constraint programming, but concern about performance and getting stuck in local minima.
Monte Carlo, randomness, and heuristics
- The article’s skepticism about Monte Carlo is strongly contested: several argue random methods are essential for very hard problems, for approximate answers, and inside ML loops (e.g., Monte Carlo tree search, simulated annealing).
- Others welcome the “no randomness” stance for debuggability and predictability, warning that casual randomness can create opaque edge cases.
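As one concrete example of the randomness defenders’ point, simulated annealing escapes a local minimum by occasionally accepting worse solutions. A toy sketch with a seeded PRNG, which also speaks to the debuggability concern by making every run reproducible (illustrative only, not the article’s router):

```typescript
// Toy simulated annealing on a 1-D objective, with a seeded PRNG so runs are
// fully reproducible -- one answer to the "randomness hurts debuggability"
// concern. Everything here is illustrative.
function makeRng(seed: number): () => number {
  // mulberry32: a tiny deterministic PRNG returning floats in [0, 1)
  let s = seed >>> 0;
  return () => {
    s = (s + 0x6d2b79f5) >>> 0;
    let t = Math.imul(s ^ (s >>> 15), 1 | s);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Objective with a local minimum at x = -1 and the global minimum at x = 2.
const cost = (x: number) => (x + 1) ** 2 * (x - 2) ** 2 + 0.1 * (x - 2) ** 2;

function anneal(seed: number, steps = 5000): number {
  const rng = makeRng(seed);
  let x = -1;      // start trapped in the local minimum
  let temp = 2.0;
  for (let i = 0; i < steps; i++) {
    const candidate = x + (rng() - 0.5) * 2; // random neighbor
    const delta = cost(candidate) - cost(x);
    // Always accept improvements; accept regressions with probability e^(-delta/T),
    // which is what lets the walk climb out of the local basin early on.
    if (delta < 0 || rng() < Math.exp(-delta / temp)) x = candidate;
    temp *= 0.999;  // geometric cooling schedule
  }
  return x;
}
```

A greedy (always-downhill) search started at x = -1 can never reach x = 2 here; the random acceptance rule is doing real work, which is the commenters’ point.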
Data structures, graph search, and performance
- Spatial hashing vs trees: the thread debates the claim that trees are “insanely slow.” Critics note that tree structures (octrees, k‑d trees) matter when data is unevenly distributed or query regions don’t align with grid cells. Everyone agrees: measure on real workloads.
- BFS/DFS/A*/Dijkstra: commenters correct simplifications in the post, discuss their relationships, and point out specialized variants (e.g., Jump Point Search, contraction hierarchies) for particular domains.
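The spatial-hashing side of that debate reduces to: bucket geometry into uniform cells and scan only the cells a query overlaps. A minimal sketch (illustrative; as the thread concludes, the right structure depends on the workload):

```typescript
// Minimal spatial hash: bucket points by integer cell coordinates.
// Queries scan only the cells overlapping the search circle.
class SpatialHash {
  private cells = new Map<string, { x: number; y: number }[]>();
  constructor(private cellSize: number) {}

  private key(cx: number, cy: number): string {
    return `${cx},${cy}`;
  }

  insert(p: { x: number; y: number }): void {
    const k = this.key(Math.floor(p.x / this.cellSize), Math.floor(p.y / this.cellSize));
    const bucket = this.cells.get(k);
    if (bucket) bucket.push(p);
    else this.cells.set(k, [p]);
  }

  // Return all points within radius r of (x, y).
  queryRadius(x: number, y: number, r: number): { x: number; y: number }[] {
    const out: { x: number; y: number }[] = [];
    const c = this.cellSize;
    for (let cx = Math.floor((x - r) / c); cx <= Math.floor((x + r) / c); cx++) {
      for (let cy = Math.floor((y - r) / c); cy <= Math.floor((y + r) / c); cy++) {
        for (const p of this.cells.get(this.key(cx, cy)) ?? []) {
          if ((p.x - x) ** 2 + (p.y - y) ** 2 <= r * r) out.push(p);
        }
      }
    }
    return out;
  }
}
```

The failure mode the tree advocates point at is visible here: if query radii are much larger or smaller than the cell size, or points pile into a few cells, the scan degenerates, which is why “measure on real workloads” is the only safe conclusion.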
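On the search-algorithm relationships the commenters point out: Dijkstra is A* with a zero heuristic, and on a unit-cost grid Dijkstra expands the same nodes BFS would. A compact grid A* sketch with a Manhattan-distance heuristic (naive sorted open list instead of a heap; illustrative only, since a real router’s grid and costs are far richer):

```typescript
// A* on a 4-connected grid (0 = free, 1 = obstacle), unit move cost.
// With h = 0 this degenerates to Dijkstra; on a unit-cost grid Dijkstra
// in turn explores the same nodes as BFS.
type Cell = [number, number];

function astar(grid: number[][], start: Cell, goal: Cell): number {
  const rows = grid.length;
  const cols = grid[0].length;
  const h = ([r, c]: Cell) => Math.abs(r - goal[0]) + Math.abs(c - goal[1]);
  const key = ([r, c]: Cell) => r * cols + c;
  const g = new Map<number, number>([[key(start), 0]]); // best known cost so far
  // Naive priority queue: re-sort every pop. Fine for a sketch; use a heap in practice.
  const open: Cell[] = [start];
  while (open.length > 0) {
    open.sort((a, b) => (g.get(key(a))! + h(a)) - (g.get(key(b))! + h(b)));
    const cur = open.shift()!;
    if (cur[0] === goal[0] && cur[1] === goal[1]) return g.get(key(cur))!;
    for (const [dr, dc] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nr = cur[0] + dr;
      const nc = cur[1] + dc;
      if (nr < 0 || nr >= rows || nc < 0 || nc >= cols || grid[nr][nc] === 1) continue;
      const tentative = g.get(key(cur))! + 1;
      if (tentative < (g.get(key([nr, nc])) ?? Infinity)) {
        g.set(key([nr, nc]), tentative);
        open.push([nr, nc]);
      }
    }
  }
  return -1; // goal unreachable
}
```

Variants like Jump Point Search exploit grid symmetry to skip nodes this version would expand; they change which nodes are visited, not the optimality of the result.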
Implementation language, visualization, and hardware-as-code
- The choice of JavaScript gets mixed reactions: proponents emphasize algorithmic improvements and rapid iteration plus great visualization tooling; skeptics note that for very large designs, constant factors and cache behavior may still force a native re‑implementation or tight C/Rust cores.
- Many highlight visualization as the real superpower: JS/React, notebooks, and SCAD-style tools make it easy to see algorithm behavior and iterate.
- “Hardware/layout as code” draws interest but also skepticism: textual schematics are appealing, but layout is seen as inherently spatial and needing direct graphical manipulation, perhaps guided by CSS‑like constraint systems and smarter autorouters.
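One way to picture the “CSS‑like constraint” idea is net styling selected by pattern rather than set net-by-net, layered on top of a textual board description. A purely hypothetical sketch, not any real tool’s format or API:

```typescript
// Hypothetical "layout as code" sketch: textual component/net declarations
// plus CSS-like rules that match nets by name pattern. Every identifier and
// field here is invented for illustration.
const board = {
  components: [
    { id: "U1", footprint: "SOIC-8", at: { x: 10, y: 10 } },
    { id: "C1", footprint: "0402", at: { x: 14, y: 10 } },
  ],
  nets: [
    { name: "VCC", pads: ["U1.8", "C1.1"], style: { widthMm: 0.5 } },
    { name: "GND", pads: ["U1.4", "C1.2"], style: { widthMm: 0.5, layer: "B.Cu" } },
  ],
  // CSS-like rule: one selector styles a whole family of nets at once,
  // instead of repeating the constraint on each DDR_* net.
  rules: [{ match: /^DDR_/, style: { widthMm: 0.2, lengthMatchGroup: "ddr" } }],
};
```

The `at: {x, y}` fields are where the skeptics’ objection bites: positions are still raw coordinates, which is why most commenters want graphical placement with textual constraints on top, rather than text all the way down.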