YC is wrong about LLMs for chip design

Interpretation of YC’s request

  • Several commenters think the article misreads YC: the “5–100x” claim is about ASIC speed/efficiency vs CPUs for specific algorithms, not about LLMs designing chips 100x better than humans.
  • Others say YC’s RFS is vaguely worded and blends “LLMs for EDA” with “purpose-built accelerators,” creating confusion.

Feasibility of LLMs in chip design

  • Strong skepticism that current LLMs can design high‑performance ASICs or sophisticated Verilog/SystemVerilog; output is seen as “mediocre” and error‑prone.
  • Some argue that even if LLMs improve, the real bottlenecks are verification, tapeout cost, and integration, not typing HDL.
  • Others counter that it is premature to assume future LLMs won’t gain reasoning and mathematical ability.

High-Level Synthesis (HLS) and existing EDA flows

  • HLS tools are decades old; they are widely viewed as useful for prototyping and FPGA work, but often produce inferior results vs hand‑tuned RTL, especially in performance‑critical designs.
  • Practitioners note HLS is rarely used for major ASIC blocks; when it is, quality and standards compliance can be problematic.
  • Some researchers and engineers see active progress in open HLS tooling and academic/industry collaborations.

Data scarcity and proprietary IP

  • A recurring theme: unlike software, there is very little high‑quality, public HDL and EDA workflow data.
  • Most real designs are proprietary “IP”; large companies (e.g., GPU vendors) can train internal models like ChipNeMo, but startups lack such corpora.
  • Suggestions include synthetic data, custom simulators, or expert-authored datasets, but many doubt they’ll match real-world diversity.
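To make the "synthetic data" suggestion concrete, here is a minimal sketch of what such a pipeline might look like: generating small combinational Verilog snippets paired with their exhaustively enumerated truth tables as training pairs. Everything here (the module shape, the expression generator) is invented for illustration; commenters' skepticism is precisely that toy generators like this cannot match the diversity of real designs.

```python
import itertools
import random

# Hypothetical sketch: generate (Verilog snippet, truth table) training pairs
# for tiny combinational circuits. This illustrates the "synthetic data" idea
# only; it is nowhere near the diversity of real proprietary IP.

OPS = {"&": lambda a, b: a & b, "|": lambda a, b: a | b, "^": lambda a, b: a ^ b}

def random_expr(inputs, rng):
    """Build a random two-operator expression over the input names."""
    a, b, c = rng.sample(inputs, 3)
    op1, op2 = rng.choices(list(OPS), k=2)
    return f"({a} {op1} {b}) {op2} {c}", (op1, op2, a, b, c)

def truth_table(parts, inputs):
    """Enumerate every input combination and evaluate the expression."""
    op1, op2, a, b, c = parts
    rows = []
    for bits in itertools.product([0, 1], repeat=len(inputs)):
        env = dict(zip(inputs, bits))
        out = OPS[op2](OPS[op1](env[a], env[b]), env[c])
        rows.append((bits, out))
    return rows

def make_sample(rng):
    inputs = ["a", "b", "c"]
    expr, parts = random_expr(inputs, rng)
    verilog = (
        "module f(input a, input b, input c, output y);\n"
        f"  assign y = {expr};\n"
        "endmodule"
    )
    return verilog, truth_table(parts, inputs)

src, table = make_sample(random.Random(0))
print(src)
print(len(table), "truth-table rows")
```

A custom simulator plays the same role here as `truth_table`: it supplies a ground-truth label for each generated design, which is what makes the pair usable as training data.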

Promising roles for AI in hardware

  • Many see near‑term value in:
    • Copilot‑style assistance for boilerplate RTL/HLS, scripts (Tcl/Python), and tool flows.
    • Documentation, Q&A, refactoring, and navigating complex manuals/specs.
    • Verification support, test generation, log/waveform analysis, and bug triage.
  • Consensus: AI as a productivity aid with human oversight is plausible; fully automatic chip design is not.
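For a flavor of the "log analysis and bug triage" use case, here is a small sketch of the non-AI scaffolding such a tool might sit on: grouping simulator error lines by a normalized message signature, so a human (or an LLM summarizer downstream) sees one bucket per distinct failure mode rather than thousands of raw lines. The log format and field names are invented for illustration.

```python
import re
from collections import defaultdict

# Hypothetical sketch: bucket simulation failures by message "signature".
# The log line format below is made up; real flows would adapt the regex
# to their simulator's output.
ERROR_RE = re.compile(r"^(UVM_ERROR|ASSERT FAILED)\s+@(\d+):\s+(.*)$")

def triage(log_lines):
    """Group error lines by (kind, normalized message), keeping timestamps."""
    buckets = defaultdict(list)
    for line in log_lines:
        m = ERROR_RE.match(line)
        if not m:
            continue
        kind, time_ns, msg = m.group(1), int(m.group(2)), m.group(3)
        # Replace volatile details (hex values, numbers) so that repeated
        # instances of the same bug collapse into one bucket.
        signature = re.sub(r"0x[0-9a-fA-F]+|\d+", "<N>", msg)
        buckets[(kind, signature)].append(time_ns)
    return buckets

log = [
    "UVM_ERROR @1200: fifo overflow, depth=16",
    "UVM_ERROR @3400: fifo overflow, depth=32",
    "ASSERT FAILED @5000: rsp.addr == 0xdead0000",
]
for (kind, sig), times in triage(log).items():
    print(kind, "|", sig, "| occurrences:", len(times))
```

The point of the sketch is the division of labor the commenters describe: deterministic tooling does the bucketing, and the model's job is reduced to summarizing and prioritizing buckets, with an engineer still in the loop.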

Economics, hype, and VC logic

  • Some view YC’s push as “spray and pray” AI hype; others say with a 10+ year horizon, betting on exponential AI progress is rational.
  • A key economic argument: even imperfect auto‑design that’s much cheaper could unlock many small, currently uneconomic ASIC/FPGA niches.
  • A broader debate surfaces about overapplying LLMs to domains with little digital training data, and about the general AI bubble vs real, enduring gains.
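The economic argument about unlocking small niches reduces to simple break-even arithmetic, sketched below. Every figure is invented for illustration; the point is only the shape of the argument, not the numbers.

```python
# Back-of-envelope sketch of the "cheaper design unlocks niches" argument.
# All dollar figures and margins below are illustrative assumptions,
# not industry data.

def breakeven_units(nre_cost, unit_margin):
    """Units needed before a custom chip pays back its one-time NRE."""
    return nre_cost / unit_margin

# Today: hand-designed ASIC with (say) $5M of engineering + tapeout NRE,
# earning (say) $10 of margin per unit over an off-the-shelf part.
today = breakeven_units(5_000_000, 10)

# Hypothetical future: automation cuts the design portion of NRE sharply
# while tapeout cost remains, so total NRE drops to (say) $1.4M.
cheaper = breakeven_units(1_400_000, 10)

print(f"break-even today:   {today:,.0f} units")
print(f"break-even cheaper: {cheaper:,.0f} units")
```

Under these assumed numbers, the break-even volume drops from 500k to 140k units, so any niche whose addressable volume falls between those two figures flips from uneconomic to viable, even if the auto-generated design is no better than today's.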