Data centers in space make no sense

Technical feasibility: power and cooling

  • Many argue space is the worst place for high‑density compute: with no convection, heat can only leave by radiation, so you need enormous, heavy radiators. Commenters point to the ISS, which rejects roughly 70–120 kW using thousands of kilograms of radiator panels with large surface area.
  • Back‑of‑envelope math in the thread: a single modern AI rack (100–500 kW) would need tens of m² of high‑temperature radiator, and a MW‑scale “satellite data center” would need radiator and solar‑panel areas on the order of football fields.
  • Supporters say radiative cooling scales with T⁴ (Stefan–Boltzmann), better coatings (e.g. graphene) and heat pumps could help, and launch cost drops could make the mass tolerable. Critics reply that this is still orders of magnitude worse than air/water cooling on Earth.
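The radiator math above can be sketched with the Stefan–Boltzmann law. The specific numbers here (emissivity, radiator temperatures, a 300 kW rack) are illustrative assumptions, not figures from the thread:

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law.
# Emissivity, temperatures, and rack power below are illustrative assumptions.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(power_w, t_radiator_k, t_sink_k=3.0, emissivity=0.9, sides=2):
    """Radiator area needed to reject `power_w` watts to deep space."""
    flux = emissivity * SIGMA * (t_radiator_k**4 - t_sink_k**4)  # W/m^2 per side
    return power_w / (flux * sides)

# A 300 kW rack with a 350 K (~77 C) radiator:
print(f"{radiator_area_m2(300_000, 350):.0f} m^2 at 350 K")
# Same rack with heat pumped up to a 600 K radiator (the T^4 argument):
print(f"{radiator_area_m2(300_000, 600):.0f} m^2 at 600 K")
```

This shows both sides of the argument: at modest temperatures a single rack needs ~200 m² of double‑sided radiator, while heat‑pumping to higher temperatures shrinks that by roughly an order of magnitude, at the cost of pump power and mass.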

Architecture, latency, and workloads

  • For AI training, latency between GPUs must be in the ns–µs range; that implies one tightly coupled cluster, not thousands of small satellites. A giant orbital cluster would be extremely large, heavy, and fragile.
  • For inference, workloads are embarrassingly parallel and could be sharded across many small sats, with low‑bandwidth text I/O. Several think this is the only semi‑plausible use case, but it doesn’t solve the real bottleneck (training clusters on Earth).

Economics and scale

  • Numbers floated: Musk has talked about up to 1M satellites, several million tonnes of hardware, and tens of thousands of Starship launches over a decade or more.
  • Multiple commenters run cost‑per‑kW and cost‑per‑kg comparisons: even with optimistic Starship pricing, space solar + cooling comes out far more expensive than ground data centers with overbuilt solar, wind, nuclear, or hydro.
  • Any breakthrough (superconducting compute, ultra‑light solar, droplet radiators) would also make Earth data centers cheaper, undercutting the space advantage.
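The thread's cost comparisons can be sketched as a toy $/kW model. Every figure here (masses per kW, launch prices, hardware and grid costs) is an assumption for the sketch, not a number from the discussion:

```python
# Toy $/kW comparison: orbital vs. ground data center power.
# All dollar and mass figures are illustrative assumptions.

def space_cost_per_kw(launch_price_per_kg, kg_per_kw, hardware_cost_per_kw):
    """$/kW in orbit: launch mass (solar + radiators + structure) plus hardware."""
    return launch_price_per_kg * kg_per_kw + hardware_cost_per_kw

def ground_cost_per_kw(solar_cost_per_kw, overbuild_factor, balance_of_plant_per_kw):
    """$/kW on the ground: solar overbuilt for firm supply plus site costs."""
    return solar_cost_per_kw * overbuild_factor + balance_of_plant_per_kw

ground = ground_cost_per_kw(1_000, 3, 500)   # overbuilt terrestrial solar
for launch_price in (1_500, 100):            # today-ish vs. optimistic Starship $/kg
    space = space_cost_per_kw(launch_price, 30, 1_000)
    print(f"launch at ${launch_price}/kg: space ${space:,.0f}/kW vs. ground ${ground:,.0f}/kW")
```

Under these assumptions space is far more expensive at current launch prices and only becomes roughly comparable, not cheaper, even at aggressively optimistic $/kg, which is the shape of the thread's conclusion.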

Reliability, maintenance, and radiation

  • High‑end GPUs are failure‑prone even on Earth; in orbit they’d face radiation‑induced bit flips and long‑term lattice damage. Proper shielding is heavy; rad‑hard chips are slow and expensive.
  • Replacing failed hardware on thousands of satellites is essentially impossible; the model becomes “use hard for a few years then deorbit.” Critics see enormous waste and no secondary market.
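The "use hard, then deorbit" model can be quantified with a simple attrition curve. The failure rates below are assumptions for the sketch (terrestrial GPU fleets already see annual failure rates of a few percent; radiation would push that higher):

```python
# Capacity attrition for unserviceable orbital hardware: failed units are
# simply lost. Annual failure rates below are illustrative assumptions.

def surviving_fraction(annual_failure_rate, years):
    """Fraction of unserviceable hardware still working after `years`."""
    return (1 - annual_failure_rate) ** years

for rate in (0.05, 0.10, 0.20):
    frac = surviving_fraction(rate, 5)
    print(f"{rate:.0%}/yr failures -> {frac:.0%} of capacity left after 5 years")
```

Even at terrestrial‑like failure rates a constellation loses a quarter or more of its capacity over a 5‑year life with no repair option, and elevated radiation‑driven rates make the economics correspondingly worse.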

Security, regulation, and warfare

  • Some speculate space data centers are attractive mainly as a way to escape terrestrial regulation (data residency, copyright, CSAM, environmental and siting rules). Others note jurisdiction still follows the launching state, and ground staff remain vulnerable.
  • Militarily, satellites are described as fragile, easy targets for ASAT weapons or debris clouds; space offers little real protection compared to hardened underground or remote terrestrial sites.

Motives and interpretations

  • Strong undercurrent that “space data centers” are narrative cover for financial engineering: rolling a money‑losing AI venture (and possibly social media) into a profitable launch business before a SpaceX IPO, creating internal demand for Starship launches, and sustaining AI hype.
  • A minority steelman argues this could be long‑term infrastructure for space industry or species‑level resilience, but most see it as speculative at best and physically/economically untenable for the coming decades.