PCIe 8.0, announced by the PCI-SIG, will double throughput again
Shifting system architecture (GPU-as-motherboard, backplanes)
- Several comments speculate about inverting the PC: GPU board as the “motherboard,” CPU+RAM as plug‑in cards, or everything as cards on a dumb backplane.
- Perceived benefits: simpler power delivery, better density, more freedom to mix CPU/RAM/GPU modules, potentially on‑package RAM like Apple Silicon but still upgradeable.
- Skepticism: the ecosystem and compatibility story would be hard; upgrades could mean “replacing the motherboard” just to change a GPU; and multi‑GPU servers don’t map cleanly onto a “CPU card plugs into the GPU” model.
- High‑speed backplanes are criticized for awful signal integrity; cables and retimers are increasingly used even within servers to cross boards.
Power delivery and household/datacenter wiring
- Rising TDPs (talk of 800W CPUs and 600W GPUs) trigger long side discussions about residential wiring limits in the US (120V, 15–20A) vs. Europe/the Nordics (230V, 10–16A); a rough circuit-budget calculation is sketched after this list.
- People debate breaker upgrades, wire gauges, code compliance, and fire risk, especially in old houses. Cost of adding 240V circuits (especially for EVs) is noted as high.
- In data centers, per‑rack draw heading toward 1–2 MW is said to demand new PDUs, liquid cooling, and re‑architected power distribution.
- Some point out undervolting/limiting boost on CPUs/GPUs can save large amounts of power with little performance loss.
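As a rough illustration of the wiring math being argued over (a sketch, not electrical advice): the 0.8 continuous-load derating is the common US rule of thumb, and the 800W+600W build, 200W for the rest of the system, and 90% PSU efficiency are hypothetical figures taken from the thread’s worst case.

```python
# Rough illustration: how much continuous draw a typical household circuit
# leaves for a high-TDP PC. The 0.8 factor is the common US rule of thumb
# for continuous loads; European practice differs.
CIRCUITS = {
    "US 120V/15A": 120 * 15,
    "US 120V/20A": 120 * 20,
    "EU 230V/10A": 230 * 10,
    "EU 230V/16A": 230 * 16,
}

# Hypothetical worst-case build from the thread: 800W CPU + 600W GPU,
# plus ~200W for the rest of the system, at ~90% PSU efficiency.
dc_load_w = 800 + 600 + 200
wall_draw_w = dc_load_w / 0.90

for name, rating_w in CIRCUITS.items():
    continuous_w = rating_w * 0.8  # derate for continuous loads
    headroom = continuous_w - wall_draw_w
    print(f"{name}: {continuous_w:.0f}W continuous budget, "
          f"{'fits' if headroom >= 0 else 'over'} by {abs(headroom):.0f}W")
```

On these assumptions a single US 15A circuit is already marginal for such a build before anything else shares the circuit, which is what drives the breaker/wiring tangents.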
PCIe roadmapping, adoption, and consumer vs DC needs
- PCIe 8.0 work starting while 6.0 has barely shipped and 7.0 was only just finalized prompts debate over the value of the spec running “3 generations ahead” of deployments.
- Rationale given: long silicon lead times and the need for interoperability justify specs staying ahead of deployments, unlike the more chaotic Ethernet ecosystem.
- Today most deployed systems (especially consumer) are effectively PCIe 4.0/5.0. PCIe 6.0 is appearing mainly in high‑end datacenter platforms (e.g., Blackwell + high‑end NICs), with some confusion over which specific systems actually negotiate Gen6; a quick way to check a negotiated link is sketched after this list.
- Many doubt consumers need >5.0: GPUs see tiny gains, and >10 GB/s NVMe already exceeds most workloads; PCIe evolution is increasingly driven by AI/datacenter, not gaming.
- Lane count is seen as a bigger constraint for desktops; solutions involve chipsets and PCIe switches, which add cost, power, and latency.
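For anyone wanting to check what their own hardware actually negotiated (relevant to the Gen6 confusion above), a minimal Linux-only sketch using the standard sysfs link attributes; the exact speed strings vary a little between kernel versions.

```python
# Minimal sketch (Linux): read what link speed/width each PCIe device
# actually negotiated, via sysfs. Speed strings typically look like
# "16.0 GT/s PCIe" on recent kernels.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        cur_speed = (dev / "current_link_speed").read_text().strip()
        max_speed = (dev / "max_link_speed").read_text().strip()
        cur_width = (dev / "current_link_width").read_text().strip()
        max_width = (dev / "max_link_width").read_text().strip()
    except (FileNotFoundError, OSError):
        continue  # not every device exposes link attributes
    print(f"{dev.name}: x{cur_width} @ {cur_speed} "
          f"(device max: x{max_width} @ {max_speed})")
```

`lspci -vv` reports the same information in its LnkCap/LnkSta fields with more detail.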
Signaling, modulation, and comparison to Ethernet
- Commenters clarify that “GHz” is ambiguous here; since PCIe 6/7/8 use PAM4, GT/s and Gbaud are the more appropriate units than a clock frequency.
- PCIe 7/8 lane rates are broken down (e.g., 128 GT/s = 64 Gbaud PAM4; worked through in the sketch below), and the slightly awkward definition of “GigaTransfers” is critiqued.
- Ethernet per‑lane speeds are noted to be ahead (100–200 Gbps per lane in upcoming standards), with PCIe effectively following that ecosystem’s advances.
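A back-of-envelope sketch of the GT/s-vs-Gbaud relationship and raw x16 bandwidth. FLIT/encoding and protocol overhead are ignored, and PAM4 for PCIe 8.0 is an assumption here, since the signaling for that generation is not final.

```python
# Back-of-envelope: per-lane symbol rate and raw x16 bandwidth for recent
# PCIe generations. GT/s counts bit-transfers; with PAM4 each symbol carries
# 2 bits, so the symbol (baud) rate is half the GT/s figure. Figures ignore
# FLIT/encoding and protocol overhead, so usable throughput is somewhat lower.
GENS = {
    # gen: (GT/s per lane, bits per symbol)
    "5.0": (32,  1),   # NRZ
    "6.0": (64,  2),   # PAM4
    "7.0": (128, 2),   # PAM4
    "8.0": (256, 2),   # PAM4 assumed; signaling not finalized
}

for gen, (gts, bits_per_symbol) in GENS.items():
    gbaud = gts / bits_per_symbol        # symbol rate per lane
    lane_gbps = gts                      # raw bit rate per lane
    x16_gbytes = lane_gbps * 16 / 8      # raw GB/s per direction, x16 link
    print(f"PCIe {gen}: {gts} GT/s = {gbaud:.0f} Gbaud, "
          f"x16 ≈ {x16_gbytes:.0f} GB/s per direction (raw)")
```

On these raw numbers each generation doubles both the per-lane rate and the x16 total, which is the “double throughput again” in the headline.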
Real‑world benefits: gaming, storage, and bandwidth
- For gaming, higher PCIe generations mainly help when VRAM is exhausted: they shorten stutters and texture pop‑in rather than raising average FPS.
- Some argue reviewers over‑focus on averages, under‑measuring the 1%/0.1% lows and visible texture failures that correlate with bus speed and VRAM limits (a frame-time sketch follows this list).
- For general consumers, integrated audio/NICs and modest storage mean most don’t hit lane/bandwidth limits; multi‑GPU/LLM users are seen as niche and better served by server‑class hardware.
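To make the 1%/0.1% low point concrete, a small sketch of one common way those figures are derived from frame times; methodologies differ between reviewers, and the capture data here is synthetic.

```python
# Sketch of how "1% / 0.1% low" FPS figures are typically derived: take the
# slowest 1% (or 0.1%) of frames and report the FPS equivalent of their
# average frame time. This is one common variant, not the only one.
def percentile_low_fps(frame_times_ms, fraction):
    worst = sorted(frame_times_ms, reverse=True)      # slowest frames first
    n = max(1, int(len(frame_times_ms) * fraction))   # e.g. worst 1%
    avg_worst_ms = sum(worst[:n]) / n
    return 1000.0 / avg_worst_ms

# Toy capture: mostly ~8ms frames (~125 FPS) with occasional 40ms stutters,
# the kind of spike that appears when textures stream over the PCIe bus.
frames = [8.0] * 990 + [40.0] * 10
print(f"average FPS : {1000.0 * len(frames) / sum(frames):.1f}")
print(f"1% low FPS  : {percentile_low_fps(frames, 0.01):.1f}")
print(f"0.1% low FPS: {percentile_low_fps(frames, 0.001):.1f}")
```

In this toy capture the average barely moves while the 1% low collapses, which is exactly the failure mode that averages hide.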
Modularity dreams vs physical constraints
- There’s enthusiasm for GPU sockets and dedicated GPU RAM slots, but experts note HBM’s enormous pin counts and GDDR’s extreme per‑pin speeds make socketing impractical (rough numbers after this list).
- Older bus/backplane ideas (S‑100, VME, µTCA, VPX) are referenced as analogues, but commenters stress that at PCIe 6/7/8 speeds, connectors and trace lengths are severe design bottlenecks.
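Rough, approximate numbers behind the pin-count argument; the device counts below are illustrative configurations, not any specific product.

```python
# Approximate data-bus width and per-pin speed for common memory types,
# to show why GPU memory ends up soldered or co-packaged rather than socketed.
MEM = {
    # name: (data bits per device/stack, per-pin Gb/s, illustrative count)
    "DDR5 DIMM (for comparison)": (64,   6.4,  1),
    "GDDR7 device":               (32,   32.0, 12),
    "HBM3 stack":                 (1024, 6.4,  6),
}

for name, (bits, gbps_per_pin, count) in MEM.items():
    total_data_pins = bits * count
    bw_gbytes = bits * gbps_per_pin * count / 8
    print(f"{name}: {total_data_pins} data signals total, "
          f"~{bw_gbytes:.0f} GB/s aggregate")
```

Thousands of data signals each running at multi‑Gb/s is a very different connector problem from a 64‑bit DIMM, which is the crux of the “you can’t socket this” argument.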