AI-designed chips are so weird that 'humans cannot understand them'
Prior work in evolutionary hardware
- Many commenters note this is not conceptually new: evolutionary algorithms have designed antennas, analog circuits, and FPGA configs since the 1990s–2000s.
- Classic examples: NASA’s evolved antenna and Adrian Thompson’s evolved FPGA tone-discriminator, which exploited subtle chip physics and worked only on the specific device it evolved on (and even only under specific environmental conditions).
- These systems often leverage unintended effects (parasitic capacitance, EMI, sub‑threshold behavior), producing designs that look “alien” and can’t be understood in traditional schematic terms.
What’s actually novel in the Nature work
- The linked paper uses deep learning as a fast surrogate model inside an evolutionary/optimization loop: the network predicts chip performance because full physical simulation is too slow (a minimal sketch of this loop follows the list).
- The key step: deep-learning-guided global search produces efficient wireless/mmWave structures that humans would be unlikely to propose, and does so without a human in the loop for each candidate.
- Results from fabricated prototypes are shown; performance improves on some metrics, but the modeling is still imperfect and not all design targets are met.
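To make that loop concrete, here is a minimal sketch of surrogate-assisted evolutionary search. It is illustrative only, not the paper’s pipeline: `full_simulation` stands in for a slow electromagnetic solver, and `Surrogate` (ridge regression on random features) stands in for the deep-learning predictor.

```python
# Surrogate-assisted evolutionary search: a cheap learned model ranks many
# candidates; only the most promising ones get the expensive simulation.
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # hypothetical geometry parameters of an mmWave structure

def full_simulation(x):
    """Stand-in for a slow EM field solver (here: a toy objective)."""
    return -np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * x).sum()

class Surrogate:
    """Stand-in for a deep-learning performance predictor: ridge regression
    on fixed random features, retrained as evaluations accumulate."""
    def __init__(self):
        self.W = rng.normal(size=(DIM, 64))
    def _features(self, X):
        return np.tanh(X @ self.W)
    def fit(self, X, y):
        F = self._features(X)
        self.coef = np.linalg.solve(F.T @ F + 1e-3 * np.eye(64), F.T @ y)
    def predict(self, X):
        return self._features(X) @ self.coef

# Seed with a few expensive evaluations, then alternate: retrain surrogate,
# propose mutants, rank them cheaply, verify only the top few for real.
X = rng.uniform(-1, 1, size=(32, DIM))
y = np.array([full_simulation(x) for x in X])
model = Surrogate()
for generation in range(20):
    model.fit(X, y)
    parents = X[np.argsort(y)[-8:]]                          # current elite
    mutants = parents[rng.integers(8, size=256)] + 0.1 * rng.normal(size=(256, DIM))
    top = mutants[np.argsort(model.predict(mutants))[-4:]]   # surrogate ranking
    X = np.vstack([X, top])
    y = np.concatenate([y, [full_simulation(t) for t in top]])

print(f"best verified score: {y.max():.3f} after {len(y)} real simulations")
```

The speedup comes from the call budget: each generation simulates only 4 of 256 proposals for real; the surrogate filters the rest.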
Debate over the term “AI”
- Long argument over whether this is “AI” or just “an optimizer.”
- Some see optimization/genetic algorithms as core classical AI; others say AI should refer to the trained function/agent, not the training algorithm.
- Several invoke the “AI effect”: once a technique works and is understood, people stop calling it AI. Others blame marketing for diluting the term.
Robustness, overfitting, and testability
- Major concern: designs that exploit specific chip quirks or the local environment (power-supply noise, EM from nearby devices) may fail in production, on new process nodes, or under different operating conditions.
- Existing evolutionary hardware work shows extreme overfitting: circuits tied to one chip, batch, temperature, or even a specific lamp on the same power line.
- Suggested mitigations: use simulators with variability models; evolve across multiple, diverse chips and locations; inject noise and constraints; make robustness an explicit part of the objective (see the robustness sketch after this list).
- Testing is hard: you can’t exhaustively check all inputs or physical states, especially for low-level hardware (a toy illustration also follows the list). Some suggest correctness-preserving transformations or machine-checkable proofs, but note this remains largely unsolved.
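One way to encode these mitigations is to make the fitness function itself hostile to overfitting: score each candidate by its worst case across randomly sampled process corners and environmental perturbations. A minimal sketch, with `simulate` and its variability parameters as hypothetical stand-ins for a variability-aware circuit simulator:

```python
# Robustness-first objective: a design is only as good as its worst
# sampled corner, so evolution cannot latch onto one chip's quirks.
import numpy as np

rng = np.random.default_rng(1)

def simulate(design, process_shift, temperature_c, supply_noise):
    """Stand-in for a variability-aware circuit simulator (toy objective)."""
    x = design * (1 + process_shift) + supply_noise
    return -np.sum((x - 0.3) ** 2) - 0.01 * abs(temperature_c - 25.0)

def robust_fitness(design, n_corners=16):
    """Worst case over sampled corners; a low quantile would also work."""
    return min(
        simulate(
            design,
            process_shift=rng.normal(scale=0.05, size=design.shape),  # die-to-die variation
            temperature_c=rng.uniform(-20.0, 85.0),                   # operating range
            supply_noise=rng.normal(scale=0.02, size=design.shape),   # power-rail noise
        )
        for _ in range(n_corners)
    )

# Plain (1+1) evolution on the robust objective.
best = rng.uniform(-1, 1, size=8)
best_fit = robust_fitness(best)
for _ in range(500):
    child = best + 0.05 * rng.normal(size=8)
    fit = robust_fitness(child)
    if fit >= best_fit:
        best, best_fit = child, fit

print(f"worst-case fitness of final design: {best_fit:.3f}")
```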
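On the testing point, a toy equivalence check illustrates why exhaustive verification stops scaling: comparing an 8-bit unit against a reference model exhaustively is trivial, while a 64-bit one can only be sampled. `evolved_adder` here is a hypothetical stand-in for an opaque evolved netlist.

```python
# Exhaustive vs. sampled equivalence checking against a reference model.
import itertools
import random

def reference_adder(a, b, bits):
    """Trusted specification: modular addition."""
    return (a + b) % (1 << bits)

def evolved_adder(a, b, bits):
    """Stand-in for the opaque evolved design under test."""
    return (a + b) & ((1 << bits) - 1)

# 8-bit: the full input space is only 2**16 pairs, so check everything.
assert all(reference_adder(a, b, 8) == evolved_adder(a, b, 8)
           for a, b in itertools.product(range(256), repeat=2))

# 64-bit: 2**128 pairs. Sampling finds gross bugs but proves nothing about
# corners the sampler never visits, let alone analog/physical-state effects.
random.seed(0)
for _ in range(100_000):
    a, b = random.getrandbits(64), random.getrandbits(64)
    assert reference_adder(a, b, 64) == evolved_adder(a, b, 64)

print("all sampled checks passed (not a proof of equivalence)")
```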
Biology, modularity, and engineering practice
- Several compare these designs to biology: messy cross-layer hacks that work extremely well but defy neat human abstractions.
- Human engineering emphasizes modularity, clear interfaces, and understandability; global optimization (by AI or otherwise) can outperform this but at the cost of interpretability and easy modification.
- There’s skepticism of claims that humans “cannot” understand such chips; commenters argue the designs simply haven’t been analyzed with the right models and tools yet, and that the headline exaggerates for effect.
Security and broader implications
- Concern that AI-designed circuits and code could be a security nightmare, especially if they rely on subtle, poorly understood effects (echoing Spectre/Meltdown-type surprises).
- Some extrapolate to software and other engineering domains: tools like LLM-based IDEs already push humans toward being testers/overseers of opaque autogenerated systems rather than their primary designers.