How Oxide cuts data center power consumption in half

Apple/ARM vs x86 in the data center

  • Some wonder why Apple doesn’t sell rack‑mount M‑series servers, citing the chips’ strong performance per watt as a natural fit for data centers.
  • Others argue Apple already uses Apple Silicon internally for specific privacy/security workloads but that this is a niche use case.
  • Skeptics note AMD’s latest Epyc chips are extremely efficient and hard to beat, and that Apple is unlikely to open its chips or platform to the general server market.
  • Consumer macOS is described as historically unreliable as a server OS, though some of those issues have since been fixed.

Oxide’s hardware design and power savings claims

  • Key savings are attributed to: shared high‑efficiency rectifiers feeding a DC bus bar, and larger, slower fans with less airflow restriction.
  • Some doubt the claimed “12x” fan energy reduction, but Oxide staff report that typical default fan curves overcool the hardware, so their larger fans run at low RPM most of the time.
  • Oxide confirms future generations will fit existing racks, emphasizing rack‑scale design and reuse.
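The fan claim in the list above is at least plausible under the fan affinity laws: fan power scales roughly with the cube of fan speed, so a modest reduction in RPM yields a large reduction in power. A minimal sketch (the specific speed ratio below is an illustrative assumption, not Oxide's published figure):

```python
def fan_power_ratio(speed_new: float, speed_old: float) -> float:
    """Fan affinity law: power scales ~ (speed)^3."""
    return (speed_new / speed_old) ** 3

# Illustrative: running fans at 44% of a baseline speed gives
# 0.44^3 ≈ 0.085, i.e. roughly a 12x reduction in fan power.
ratio = fan_power_ratio(0.44, 1.0)
print(f"relative fan power: {ratio:.3f} (~{1 / ratio:.0f}x reduction)")
```

Larger, slower fans also move more air per watt than small high-RPM fans, which compounds the cubic-law savings.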

Power distribution, redundancy, and failure modes

  • Debate over whether a shared DC bus bar and power shelf are a single point of failure, compared with a conventional design using ~70 individual PSUs.
  • Pro‑bus‑bar side: fewer, higher‑quality rectifiers with N+1 (or more) redundancy are preferable; bus bars are “dumb copper” and very reliable.
  • Critics raise DC fault protection and the risk of rack‑wide outages; others counter that at 48V, with proper fusing and redundant rectifiers, these risks are manageable.
  • Comparison to traditional dual‑PSU, dual‑PDU setups highlights that those also have systemic failure risks and capacity pitfalls.
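The 48V point in the list above is partly about current: for a given rack load, a higher bus voltage means proportionally less current, which keeps fusing and bus-bar sizing tractable, and N+1 means provisioning one rectifier beyond what the load requires. A rough sketch (the 15 kW rack load and 3 kW per-rectifier rating are illustrative assumptions, not Oxide's specs):

```python
import math

def bus_current_amps(power_w: float, bus_volts: float) -> float:
    """DC bus current for a given load: I = P / V."""
    return power_w / bus_volts

def rectifiers_needed(power_w: float, rect_w: float, spares: int = 1) -> int:
    """N+spares redundancy: enough rectifiers for the load, plus spares."""
    return math.ceil(power_w / rect_w) + spares

rack_w = 15_000  # assumed rack load
print(bus_current_amps(rack_w, 48))      # 312.5 A on a 48V bus
print(bus_current_amps(rack_w, 12))      # 1250.0 A if the bus were 12V
print(rectifiers_needed(rack_w, 3_000))  # 6 rectifiers for N+1 at 3 kW each
```

The same arithmetic explains the telco preference for 48V DC plants: the current stays low enough that "dumb copper" distribution with simple overcurrent protection remains practical at rack scale.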

Relation to OCP and telco DC practices

  • Commenters point out DC bus bars and centralized rectifiers have long existed in telco and Open Compute designs.
  • Distinction: OCP gear is hard to buy and integrate for ordinary enterprises; Oxide’s pitch is a turnkey, vendor‑supported rack.

Software stack, Illumos, and security

  • Oxide runs Illumos beneath its hypervisor and control‑plane services; customers run standard VMs and containers on top.
  • Some worry about relying on a niche OS and speculative‑execution mitigations; others respond that Oxide explicitly owns and ships all patches, with a single-vendor responsibility model.

Market fit, pricing, GPUs, and homelab interest

  • Current product targets large organizations; prices and minimum scale don’t fit fast‑growing startups or homelabs.
  • Many would like a smaller or cheaper system or just Oxide’s control plane/BMC on commodity servers; Oxide says that wouldn’t meet their design goals and they lack bandwidth for loss‑leader lines.
  • Lack of GPUs is noted as a gap given AI demand; Oxide acknowledges this and plans to address it later.

Energy and climate framing

  • Some agree data center efficiency matters; others find the “data centers use X% of world power” framing weak, arguing that the value of the workloads, and avoiding wasteful ones (e.g., some LLM uses), matters more than the raw percentage.