Intel's make-or-break 18A process node debuts for data center with 288-core Xeon

Homelab dreams, used server gear, and RAM prices

  • Many readers fantasize about running such CPUs in Proxmox/homelabs; most see these chips as something to buy used on eBay years later.
  • Used EPYC systems and odd SKUs (e.g., low-priced cloud parts) once offered “ridiculous” value; several note that prices, and especially RAM costs, have risen sharply.
  • DDR4/DDR5 price increases are seen as the current bottleneck. Some even talk about RAM/SSD “speculation.”
  • Power, noise, and non‑standard server parts are mentioned as constraints for homelabs, though full decommissioned systems (PSUs included) still offer value.

E-cores, no hyperthreading, and workload fit

  • The 288-core Xeon 6 uses only E‑cores, without hyperthreading; posters debate whether that tradeoff is competitive.
  • Arguments for E‑cores:
    • More real cores per die and better perf/watt for highly parallel workloads (virtualized RAN, build farms, some HPC).
    • Avoids hyperthreading side‑channel issues and gives more predictable per‑vCPU performance for clouds.
  • Arguments against:
    • Weaker single‑thread performance and no AVX‑512; a bad fit for some HPC, scientific, or SIMD-heavy workloads (see the feature-check sketch after this list).
    • Some see Intel’s E‑core strategy as having “killed” ubiquitous AVX‑512.
  • Several note that many real workloads see minimal benefit from hyperthreading and want “real cores + high frequency + memory bandwidth.”
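
To make the AVX‑512 point concrete: vectorized libraries typically gate their fast paths on CPU feature flags and silently fall back to narrower code on parts that lack them. A minimal Linux-only sketch of such a check (the flag names are standard /proc/cpuinfo contents, not from the discussion):

```python
# Minimal Linux-only sketch: report which SIMD extensions the CPU exposes.
# On an E-core-only Xeon, avx2 is expected but the avx512* flags are absent,
# so SIMD-gated code paths fall back to narrower vectors.

def cpu_flags() -> set:
    """Return the flag set from the first processor entry in /proc/cpuinfo."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for feature in ("sse4_2", "avx2", "avx512f", "avx512vl", "avx512bw"):
    print(f"{feature:10s} {'yes' if feature in flags else 'no'}")
```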

Cloud vs on‑prem economics

  • One large subthread uses this core density to argue for moving “fixed” workloads off public cloud:
    • Compare 3‑year cloud reserved-instance pricing against servers amortized over 7 years (a worked sketch follows this list).
    • Non‑elastic infra (ERP, HR, AD, dev/test, DBs) is often cheaper on-prem or in colo, assuming you avoid cloud egress traps.
  • Counterpoints:
    • Need to include costs for power, cooling, space, redundant connectivity, backup site, compliance, support contracts, and 24/7 staffing.
    • Talent to design, operate, and secure on‑prem infra is scarce and expensive; many orgs mis‑hire or can’t evaluate infra engineers.
    • You still need skilled people to run AWS; complexity is not eliminated, just shifted.
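
A back-of-the-envelope version of that amortization comparison, as a sketch; every dollar figure below is an illustrative placeholder assumption, not a number from the thread:

```python
# Illustrative sketch of the cloud-vs-on-prem amortization argument.
# All figures are placeholder assumptions, not data from the discussion.

CLOUD_MONTHLY = 4_000          # 3-year reserved instance, per month
SERVER_CAPEX = 60_000          # server + install, amortized over 7 years
SERVER_OPEX_MONTHLY = 1_500    # colo space, power, cooling, support contract

def monthly_cost_cloud() -> float:
    return CLOUD_MONTHLY

def monthly_cost_onprem() -> float:
    amortized_capex = SERVER_CAPEX / (7 * 12)   # spread capex over 84 months
    return amortized_capex + SERVER_OPEX_MONTHLY

print(f"cloud:   ${monthly_cost_cloud():,.0f}/mo")
print(f"on-prem: ${monthly_cost_onprem():,.0f}/mo")
# The counterpoints above are exactly the terms this sketch omits:
# staffing, redundancy, compliance, and a backup site can dominate opex.
```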

Scaling software to hundreds of cores

  • Some worry the “cluster-on-a-package” topology (chiplets, many cores, NUMA) makes OS and runtime scheduling the new bottleneck.
  • Linux can technically handle thousands of threads, but:
    • NUMA placement and memory bandwidth become critical; several report big wins from manually pinning workloads to NUMA nodes (see the sketch after this list).
    • Kernel subsystems (e.g., networking) and shared caches can become contention points.
  • Others think the fundamentals are sound and that the main bottlenecks remain memory and I/O, not the scheduler, though they acknowledge that poorly written software may not scale linearly.
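
A minimal Linux-only sketch of the manual-pinning approach mentioned above; the node-to-CPU mapping is read from sysfs, and the pinned process is assumed to do its real work afterward:

```python
# Minimal Linux-only sketch: pin this process to the CPUs of one NUMA node.
# Memory binding (numactl --membind / libnuma) is out of scope here; first-touch
# allocation after pinning usually lands pages on the local node anyway.
import os

def node_cpus(node: int) -> set:
    """Parse a cpulist like '0-23,96-119' from sysfs into a set of CPU ids."""
    path = f"/sys/devices/system/node/node{node}/cpulist"
    with open(path) as f:
        spec = f.read().strip()
    cpus = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

# Pin the current process (pid 0) to node 0's CPUs; the scheduler will no
# longer migrate its threads across the NUMA boundary.
os.sched_setaffinity(0, node_cpus(0))
```

The equivalent with standard tooling is `numactl --cpunodebind=0 --membind=0 <command>`, which additionally binds memory allocations to the node.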

Packaging, process node, and foundry angle

  • Several emphasize the packaging as the real story: 12 compute tiles on 18A stacked on Intel 3 base dies and Intel 7 I/O tiles, with Foveros Direct 3D interconnect.
  • Chiplet sizing (24 cores per tile) is seen as a yield strategy for a new node.
  • Strong CXL support is noted; some think the real play is becoming a CXL memory/compute hub rather than just a CPU.
  • Debate over Intel Foundry Services:
    • Skeptics question whether Intel can be trusted as a long‑term foundry partner.
    • Others argue contracts and current TSMC capacity constraints may push customers to Intel anyway.

Competitiveness vs AMD and ARM

  • Some claim Intel is far behind AMD/TSMC in perf/watt and is just “throwing cores” at the problem; others argue the Darkmont E‑cores are roughly in the same class as modern ARM Neoverse cores for many non‑AVX workloads.
  • Overall competitiveness is unclear: commenters ask for benchmarks against AMD’s high‑core-count EPYC and newer ARM server chips; several expect sites like Phoronix to provide clarity.
  • Skepticism remains about this being Intel’s “make-or-break” moment, with some dismissing such framing as repeated hype.